Android Cross-Process Communication (IPC), Part 9 — Binder Framework-Layer C++, Part 2

This article is part of the Android cross-process communication (IPC) series.

IV. Registering a Service

(I) Source code locations:

framework/native/libs/binder/
  - Binder.cpp
  - BpBinder.cpp
  - IPCThreadState.cpp
  - ProcessState.cpp
  - IServiceManager.cpp
  - IInterface.cpp
  - Parcel.cpp

frameworks/native/include/binder/
  - IInterface.h (includes BnInterface, BpInterface)

/frameworks/av/media/mediaserver/
  - main_mediaserver.cpp

/frameworks/av/media/libmediaplayerservice/
  - MediaPlayerService.cpp


(II) Overview

Since registering a service has to involve a concrete service, and most articles online use the Media service as the example, we will use it as well.

The entry point of mediaserver is the main() function in main_mediaserver.cpp:

frameworks/av/media/mediaserver/main_mediaserver.cpp      line 44
int main(int argc __unused, char** argv)
{
    // ***  some code omitted  ***
    InitializeIcuOrDie();
    // Obtain the ProcessState instance
    sp<ProcessState> proc(ProcessState::self());
    // Obtain the BpServiceManager
    sp<IServiceManager> sm = defaultServiceManager();
    AudioFlinger::instantiate();
    // Register the multimedia service
    MediaPlayerService::instantiate();
    ResourceManagerService::instantiate();
    CameraService::instantiate();
    AudioPolicyService::instantiate();
    SoundTriggerHwService::instantiate();
    RadioService::instantiate();
    registerExtensions();
    // Start the Binder thread pool
    ProcessState::self()->startThreadPool();
    // Join the current thread to the thread pool
    IPCThreadState::self()->joinThreadPool();
 }

So, inside main():

  • First, a ProcessState instance is obtained.
  • Next, defaultServiceManager() is called to obtain the IServiceManager instance.
  • Then the important services are initialized.
  • Finally, startThreadPool() and joinThreadPool() are called.
PS: Obtaining the ServiceManager: as the previous article explained, defaultServiceManager() returns a BpServiceManager object, which is used to communicate with servicemanager.

(III) Class diagram

We focus on native-layer services here, so we use the native media service as the example to explain the registration process. First, the media class relationship diagram:

(Figure: media class relationship diagram)

Legend:

  • Blue: the classes involved in registering MediaPlayerService
  • Green: the parts of the Binder architecture that communicate with the Binder driver
  • Purple: the common interfaces/base classes shared by registering and obtaining services

(IV) Sequence diagram

First, a diagram of how the media service registers itself with servicemanager during startup:

(Figure: service registration sequence diagram)

(V) Flow walkthrough

1. The instantiate() function
// MediaPlayerService.cpp     line 269
void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(String16("media.player"), new MediaPlayerService());
}
  • 1. A new service, BnMediaPlayerService, is created and announced to ServiceManager by calling addService(); other processes can then query this service from ServiceManager using the string "media.player".
  • 2. Registering the MediaPlayerService service: defaultServiceManager() returns a BpServiceManager (creating the ProcessState and BpBinder objects along the way), so this call is equivalent to BpServiceManager->addService(). A minimal, generic sketch of the same pattern follows.
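
For reference, here is a minimal sketch (not from the article) of how any native service can register itself the same way. SampleService and the name "sample.service" are hypothetical; building this requires the Android source tree and libbinder.

// sample_register.cpp - hedged sketch mirroring main_mediaserver.cpp and
// MediaPlayerService::instantiate(); SampleService / "sample.service" are made up.
#include <binder/IServiceManager.h>
#include <binder/ProcessState.h>
#include <binder/IPCThreadState.h>
#include <binder/Binder.h>

using namespace android;

// A trivial Binder entity (BBinder subclass) standing in for BnMediaPlayerService.
class SampleService : public BBinder {
};

int main() {
    sp<ProcessState> proc(ProcessState::self());             // open /dev/binder and mmap it
    sp<IServiceManager> sm = defaultServiceManager();         // BpServiceManager(BpBinder(handle 0))
    sm->addService(String16("sample.service"), new SampleService()); // same path as "media.player"
    ProcessState::self()->startThreadPool();                  // spawn binder worker threads
    IPCThreadState::self()->joinThreadPool();                 // main thread joins the pool
    return 0;
}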
2. BpServiceManager::addService()
/frameworks/native/libs/binder/IServiceManager.cpp   line 155
virtual status_t addService(const String16& name, const sp<IBinder>& service,
        bool allowIsolated)
{
    //data is the command parcel sent to BnServiceManager
    Parcel data, reply; 
    //First write the interface token (RPC header) "android.os.IServiceManager"
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());   
    // Then write the new service's name; here name is "media.player"
    data.writeString16(name);  
    // The MediaPlayerService object
    data.writeStrongBinder(service); 
     // allowIsolated= false
    data.writeInt32(allowIsolated ? 1 : 0);
    //remote() is the BpBinder held inside this BpServiceManager
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}

Service registration: the MediaPlayerService service is registered with ServiceManager under the name "media.player", so other processes can later look it up by that name.

Here we focus on writeStrongBinder() and on the final transact() call.

2.1. writeStrongBinder()
/frameworks/native/libs/binder/Parcel.cpp        line 872
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}

It simply calls flatten_binder(), so let's follow that.

2.1.1. flatten_binder()
/frameworks/native/libs/binder/Parcel.cpp        line 205
status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;

    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
     //The binder is not null
    if (binder != NULL) {
        IBinder *local = binder->localBinder(); 
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE; 
            obj.binder = 0; 
            obj.handle = handle;
            obj.cookie = 0;
        } else { 
            // This branch is taken: the object is a local BBinder
            obj.type = BINDER_TYPE_BINDER; 
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local);
        }
    } else {
        ...
    }
    return finish_flatten_binder(binder, obj, out);
}

This flattens the Binder object into a flat_binder_object:

  • For a Binder entity, cookie records the pointer to the Binder entity.
  • For a Binder proxy, handle records the proxy's handle.

For reference, localBinder() is implemented as follows:

//frameworks/native/libs/binder/Binder.cpp      line 191
BBinder* BBinder::localBinder()
{
    return this;
}

//frameworks/native/libs/binder/Binder.cpp      line 47
BBinder* IBinder::localBinder()
{
    return NULL;
}

flatten_binder() finishes by calling finish_flatten_binder(), so let's look at that next.

2.1.2. finish_flatten_binder()
//frameworks/native/libs/binder/Parcel.cpp        line 199
inline static status_t finish_flatten_binder(
    const sp<IBinder>& , const flat_binder_object& flat, Parcel* out)
{
    return out->writeObject(flat, false);
}
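
To make the flattening concrete, here is a minimal sketch (assumption: built inside the Android tree against libbinder) of the strong-binder round trip that writeStrongBinder() and readStrongBinder() implement on top of flatten_binder()/unflatten_binder():

// parcel_roundtrip.cpp - illustrative sketch only
#include <binder/Parcel.h>
#include <binder/Binder.h>
#include <utils/StrongPointer.h>

using namespace android;

int main() {
    sp<IBinder> local = new BBinder();      // a Binder entity: localBinder() != NULL
    Parcel p;
    p.writeStrongBinder(local);             // flattened with type BINDER_TYPE_BINDER
    p.setDataPosition(0);
    sp<IBinder> restored = p.readStrongBinder();
    // Within one process the cookie field carries the BBinder pointer back,
    // so 'restored' refers to the same object as 'local'.
    return (restored == local) ? 0 : 1;
}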
2.2. transact()
//frameworks/native/libs/binder/BpBinder.cpp       line 159
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    if (mAlive) {
        // code=ADD_SERVICE_TRANSACTION
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}

The Binder proxy's transact() hands the real work over to IPCThreadState. Let's first look at how IPCThreadState::self() works.

2.2.1. IPCThreadState::self()
//frameworks/native/libs/binder/IPCThreadState.cpp     line 280
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        // Create a new IPCThreadState for this thread
        return new IPCThreadState; 
    }

    if (gShutdown) return NULL;

    pthread_mutex_lock(&gTLSMutex);
     //On first entry gHaveTLS is false
    if (!gHaveTLS) { 
        // Create the per-thread TLS key
        if (pthread_key_create(&gTLS, threadDestructor) != 0) { 
            pthread_mutex_unlock(&gTLSMutex);
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}

TLS stands for Thread Local Storage. Every thread has its own private TLS area that is not shared with other threads; pthread_getspecific()/pthread_setspecific() read and write its contents. Here the IPCThreadState object saved in the current thread's TLS is retrieved.
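
As a standalone illustration (not libbinder code), the pattern behind IPCThreadState::self() is the following: one lazily created object per thread, stored under a process-wide pthread key. The real code guards key creation with a mutex and a flag instead of pthread_once, but the idea is the same.

// tls_pattern.cpp - sketch of the per-thread singleton pattern
#include <pthread.h>
#include <cstdio>

struct ThreadState { int dummy; };

static pthread_key_t gKey;
static pthread_once_t gOnce = PTHREAD_ONCE_INIT;

static void destroyState(void* p) { delete static_cast<ThreadState*>(p); }
static void makeKey() { pthread_key_create(&gKey, destroyState); }

static ThreadState* self() {
    pthread_once(&gOnce, makeKey);                                    // create the key once per process
    ThreadState* st = static_cast<ThreadState*>(pthread_getspecific(gKey));
    if (st) return st;                                                // this thread already has one
    st = new ThreadState();
    pthread_setspecific(gKey, st);                                    // remember it in this thread's TLS
    return st;
}

int main() {
    printf("thread state at %p\n", (void*)self());
    return 0;
}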

Now that we have an IPCThreadState object, let's look at its constructor.

2.2.2. The IPCThreadState constructor
//frameworks/native/libs/binder/IPCThreadState.cpp     line 686
IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mMyThreadId(gettid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this);
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}

Each thread has its own IPCThreadState, and each IPCThreadState has one mIn and one mOut. The member mProcess holds the ProcessState (one per process).

  • mIn: receives data coming from the Binder device; default capacity 256 bytes
  • mOut: holds data to be sent to the Binder device; default capacity 256 bytes
2.2.3. IPCThreadState::transact()
//frameworks/native/libs/binder/IPCThreadState.cpp     line 548
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    //Check the data for errors
    status_t err = data.errorCheck(); 
    flags |= TF_ACCEPT_FDS;
    //  ***  some code omitted  ***
    if (err == NO_ERROR) {
        //Write the transaction data
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    //  ***  some code omitted  ***
    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            //Wait for the response
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    } else {
        //oneway: waitForResponse(NULL, NULL)
        err = waitForResponse(NULL, NULL);
    }
    return err;
}

IPCThreadState's transact handling has three parts:

  • errorCheck(): checks the data for errors
  • writeTransactionData(): writes the transaction data
  • waitForResponse(): waits for the response

Let's focus on writeTransactionData() and waitForResponse().

2.2.3.1. writeTransactionData()
//frameworks/native/libs/binder/IPCThreadState.cpp      line 904
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    // handle = 0
    tr.target.handle = handle;
    // code = ADD_SERVICE_TRANSACTION
    tr.code = code;
    // binderFlags = 0
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    // data is the Parcel carrying the Media service information
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    // cmd = BC_TRANSACTION
    mOut.writeInt32(cmd);
    // Write the binder_transaction_data payload
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}

The handle identifies the destination. For service registration the destination is servicemanager: handle = 0 corresponds to the binder_context_mgr_node object, which is exactly servicemanager's binder entity. binder_transaction_data is the data structure used for communication with the binder driver; this step ultimately writes the Binder request code BC_TRANSACTION plus the binder_transaction_data into mOut.

So in the transact flow, the binder_transaction_data is written first, and then waitForResponse() executes. The short sketch below shows the resulting mOut layout.
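
Assuming the binder UAPI header <linux/android/binder.h> is available, this small sketch shows the byte layout writeTransactionData() leaves in mOut for one BC_TRANSACTION: a 32-bit command word immediately followed by the binder_transaction_data payload. This is the stream that talkWithDriver() later hands to the kernel as bwr.write_buffer.

// mout_layout.cpp - sketch of the command-stream layout, not libbinder code
#include <linux/android/binder.h>
#include <cstdint>
#include <cstring>
#include <vector>

std::vector<uint8_t> encodeTransaction(const binder_transaction_data& tr) {
    std::vector<uint8_t> out(sizeof(uint32_t) + sizeof(tr));
    uint32_t cmd = BC_TRANSACTION;                       // command word first
    memcpy(out.data(), &cmd, sizeof(cmd));
    memcpy(out.data() + sizeof(cmd), &tr, sizeof(tr));   // payload right behind it
    return out;
}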

2.2.3.2. waitForResponse()
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult) {
        uint32_t cmd;
        int32_t err;

        while (1) {
            if ((err = talkWithDriver()) < NO_ERROR) break;
            err = mIn.errorCheck();
            if (err < NO_ERROR) break;
            if (mIn.dataAvail() == 0) continue;

            cmd = (uint32_t) mIn.readInt32();

            IF_LOG_COMMANDS() {
                alog << "Processing waitForResponse Command: "
                        << getReturnString(cmd) << endl;
            }

            switch (cmd) {
                case BR_TRANSACTION_COMPLETE:
                    if (!reply && !acquireResult) goto finish;
                    break;

                case BR_DEAD_REPLY:
                    err = DEAD_OBJECT;
                                goto finish;

                case BR_FAILED_REPLY:
                    err = FAILED_TRANSACTION;
                                goto finish;

                case BR_ACQUIRE_RESULT: {
                    ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                                    const int32_t result = mIn.readInt32();
                    if (!acquireResult) continue;
                                   *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
                }
                            goto finish;

                case BR_REPLY: {
                    binder_transaction_data tr;
                    err = mIn.read( & tr, sizeof(tr));
                    ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                    if (err != NO_ERROR) goto finish;

                    if (reply) {
                        if ((tr.flags & TF_STATUS_CODE) == 0) {
                            reply -> ipcSetDataReference(
                                    reinterpret_cast <const uint8_t * > (tr.data.ptr.buffer),
                                    tr.data_size,
                                    reinterpret_cast <const binder_size_t * > (tr.data.ptr.offsets),
                                    tr.offsets_size / sizeof(binder_size_t),
                                    freeBuffer, this);
                        } else {
                            err = *reinterpret_cast<const status_t * > (tr.data.ptr.buffer);
                            freeBuffer(NULL,
                                    reinterpret_cast <const uint8_t * > (tr.data.ptr.buffer),
                                    tr.data_size,
                                    reinterpret_cast <const binder_size_t * > (tr.data.ptr.offsets),
                                    tr.offsets_size / sizeof(binder_size_t), this);
                        }
                    } else {
                        freeBuffer(NULL,
                                reinterpret_cast <const uint8_t * > (tr.data.ptr.buffer),
                                tr.data_size,
                                reinterpret_cast <const binder_size_t * > (tr.data.ptr.offsets),
                                tr.offsets_size / sizeof(binder_size_t), this);
                        continue;
                    }
                }
                            goto finish;

                default:
                    err = executeCommand(cmd);
                    if (err != NO_ERROR) goto finish;
                    break;
            }
        }

        finish:
        if (err != NO_ERROR) {
            if (acquireResult) *acquireResult = err;
            if (reply) reply -> setError(err);
            mLastError = err;
        }
        return err;
    }

In waitForResponse(), a BR_TRANSACTION_COMPLETE command is handled first; later, after the target process has received and processed the BR_TRANSACTION work and sent its reply back to the current process, the BR_REPLY command is handled.

Let's look at talkWithDriver() in detail.

2.2.3.3. talkWithDriver()
    status_t IPCThreadState::talkWithDriver(bool doReceive) {
        if (mProcess -> mDriverFD <= 0) {
            return -EBADF;
        }

        binder_write_read bwr;

        // Is the read buffer empty?
            const bool needRead = mIn.dataPosition() >= mIn.dataSize();

        // We don't want to write anything if we are still reading
        // from data left in the input buffer and the caller
        // has requested to read the next data.
            const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

        bwr.write_size = outAvail;
        bwr.write_buffer = (uintptr_t) mOut.data();

        // This is what we'll read.
        if (doReceive && needRead) {
            //Fill in the receive-buffer info; any data received later goes straight into mIn.
            bwr.read_size = mIn.dataCapacity();
            bwr.read_buffer = (uintptr_t) mIn.data();
        } else {
            bwr.read_size = 0;
            bwr.read_buffer = 0;
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            if (outAvail != 0) {
                alog << "Sending commands to driver: " << indent;
                            const void*cmds = (const void*)bwr.write_buffer;
                            const void*end = ((const uint8_t *)cmds)+bwr.write_size;
                alog << HexDump(cmds, bwr.write_size) << endl;
                while (cmds < end) cmds = printCommand(alog, cmds);
                alog << dedent;
            }
            alog << "Size of receive buffer: " << bwr.read_size
                    << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
        }

        // Return immediately if there is nothing to do.
        // If both the read buffer and the write buffer are empty, return immediately
        if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
        bwr.write_consumed = 0;
        bwr.read_consumed = 0;
        status_t err;
        do {
            IF_LOG_COMMANDS() {
                alog << "About to read/write, write size = " << mOut.dataSize() << endl;
            }
            #if defined(HAVE_ANDROID_OS)
            //Communicate with the Binder driver through repeated ioctl read/write calls
            if (ioctl(mProcess -> mDriverFD, BINDER_WRITE_READ, & bwr) >=0)
            err = NO_ERROR;
                    else
            err = -errno;
            #else
            err = INVALID_OPERATION;
            #endif
            if (mProcess -> mDriverFD <= 0) {
                err = -EBADF;
            }
            IF_LOG_COMMANDS() {
                alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
            }
        } while (err == -EINTR);

        IF_LOG_COMMANDS() {
            alog << "Our err: " << (void*)(intptr_t) err << ", write consumed: "
                    << bwr.write_consumed << " (of " << mOut.dataSize()
                    << "), read consumed: " << bwr.read_consumed << endl;
        }

        if (err >= NO_ERROR) {
            if (bwr.write_consumed > 0) {
                if (bwr.write_consumed < mOut.dataSize())
                    mOut.remove(0, bwr.write_consumed);
                else
                    mOut.setDataSize(0);
            }
            if (bwr.read_consumed > 0) {
                mIn.setDataSize(bwr.read_consumed);
                mIn.setDataPosition(0);
            }
            IF_LOG_COMMANDS() {
                TextOutput::Bundle _b(alog);
                alog << "Remaining data size: " << mOut.dataSize() << endl;
                alog << "Received commands from driver: " << indent;
                          const void*cmds = mIn.data();
                           const void*end = mIn.data() + mIn.dataSize();
                alog << HexDump(cmds, mIn.dataSize()) << endl;
                while (cmds < end) cmds = printReturnCommand(alog, cmds);
                alog << dedent;
            }
            return NO_ERROR;
        }
        return err;
    }

binder_write_read is the structure used to exchange data with the Binder device. Talking to mDriverFD through ioctl is the real data read/write interaction with the Binder driver; it operates mainly on mOut and mIn.
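
Here is a hedged, minimal sketch of the raw BINDER_WRITE_READ exchange that talkWithDriver() wraps (assuming the binder UAPI header is available; the version check, the mmap of a receive buffer and the actual command stream that real code sets up are all omitted):

// raw_binder_ioctl.cpp - illustration of the binder_write_read/ioctl pairing only
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cerrno>
#include <cstdint>
#include <linux/android/binder.h>

int main() {
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0) return 1;

    uint8_t out[256];   // stands in for mOut: BC_* commands to send (empty here)
    uint8_t in[256];    // stands in for mIn: BR_* return commands from the driver

    binder_write_read bwr = {};
    bwr.write_size   = 0;                            // nothing queued in this sketch
    bwr.write_buffer = (binder_uintptr_t)out;
    bwr.read_size    = sizeof(in);                   // let the driver fill "mIn"
    bwr.read_buffer  = (binder_uintptr_t)in;

    int ret;
    do {
        // One ioctl both pushes the write buffer and (optionally) blocks to read;
        // this mirrors the do/while (err == -EINTR) loop in talkWithDriver().
        ret = ioctl(fd, BINDER_WRITE_READ, &bwr);
    } while (ret < 0 && errno == EINTR);

    close(fd);
    return 0;
}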

The ioctl() system call then enters the Binder driver.
The overall flow is shown in the figure below.


(Figure: overall flow diagram)

(VI) The Binder driver

Inside the Binder driver, the call chain is:

ioctl——> binder_ioctl ——> binder_ioctl_write_read

1. binder_ioctl_write_read()
static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    struct binder_proc *proc = filp->private_data;
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    //Copy the user-space bwr struct into kernel space
    copy_from_user(&bwr, ubuf, sizeof(bwr));
    //  *** some code omitted ***
    if (bwr.write_size > 0) {
        //Push the write buffer's commands toward the target process
        ret = binder_thread_write(proc, thread,
                      bwr.write_buffer,
                      bwr.write_size,
                      &bwr.write_consumed);
        //  *** some code omitted ***
    }
    if (bwr.read_size > 0) {
        //Read data from this thread/process's own queues
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
             bwr.read_size,
             &bwr.read_consumed,
             filp->f_flags & O_NONBLOCK);
        if (!list_empty(&proc->todo))
            wake_up_interruptible(&proc->wait);
           //  *** some code omitted ***
    }

    //Copy the kernel-space bwr struct back to user space
    copy_to_user(ubuf, &bwr, sizeof(bwr));
     //  *** some code omitted ***
}  
2. binder_thread_write()
static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    while (ptr < end && thread->return_error == BR_OK) {
        //Copy the command word from user space; here it is BC_TRANSACTION
        if (get_user(cmd, (uint32_t __user *)ptr)) return -EFAULT;
        ptr += sizeof(uint32_t);
        switch (cmd) {
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;
            //Copy the binder_transaction_data from user space
            if (copy_from_user(&tr, ptr, sizeof(tr)))   return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        //  *** some code omitted ***
    }
    *consumed = ptr - buffer;
  }
  return 0;
}
3. binder_transaction()
static void binder_transaction(struct binder_proc *proc,
               struct binder_thread *thread,
               struct binder_transaction_data *tr, int reply){

    if (reply) {
        //  *** some code omitted ***
    }else {
        if (tr->target.handle) {
          //  *** some code omitted ***
        } else {
            // handle = 0, so target_node is the servicemanager entity
            target_node = binder_context_mgr_node;
        }
        //target_proc is the servicemanager process
        target_proc = target_node->proc;
    }

    if (target_thread) {
         //  *** some code omitted ***
    } else {
        //Use the servicemanager process's todo and wait queues
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }

    t = kzalloc(sizeof(*t), GFP_KERNEL);
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);

    //For non-oneway calls, save the current thread into the transaction's from field
    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;
    else
        t->from = NULL;

    t->sender_euid = task_euid(proc->tsk);
    t->to_proc = target_proc; //the target process of this call is servicemanager
    t->to_thread = target_thread;
    t->code = tr->code;  //code = ADD_SERVICE_TRANSACTION for this call
    t->flags = tr->flags;  // flags = 0 for this call
    t->priority = task_nice(current);

    //Allocate a buffer from the servicemanager process
    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
        tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));

    t->buffer->allow_user_free = 0;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;

    if (target_node)
        //Increase the node's reference count
        binder_inc_node(target_node, 1, 0, NULL); 
    offp = (binder_size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

    //Copy ptr.buffer and ptr.offsets of the user-space binder_transaction_data into the kernel
    copy_from_user(t->buffer->data,
        (const void __user *)(uintptr_t)tr->data.ptr.buffer, tr->data_size);
    copy_from_user(offp,
        (const void __user *)(uintptr_t)tr->data.ptr.offsets, tr->offsets_size);

    off_end = (void *)offp + tr->offsets_size;

    for (; offp < off_end; offp++) {
        struct flat_binder_object *fp;
        fp = (struct flat_binder_object *)(t->buffer->data + *offp);
        off_min = *offp + sizeof(struct flat_binder_object);
        switch (fp->type) {
            case BINDER_TYPE_BINDER:
            case BINDER_TYPE_WEAK_BINDER: {
              struct binder_ref *ref;
              struct binder_node *node = binder_get_node(proc, fp->binder);
              if (node == NULL) { 
                //Create a binder_node entity in the process that owns the service
                node = binder_new_node(proc, fp->binder, fp->cookie);
                 //  *** some code omitted ***
              }
              //Look up (or create) the binder_ref in the servicemanager process
              ref = binder_get_ref_for_node(target_proc, node);
              ...
              //Convert the type to a HANDLE type
              if (fp->type == BINDER_TYPE_BINDER)
                fp->type = BINDER_TYPE_HANDLE;
              else
                fp->type = BINDER_TYPE_WEAK_HANDLE;
              fp->binder = 0;
              fp->handle = ref->desc; //set the handle value
              fp->cookie = 0;
              binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
                       &thread->todo);
            } break;
            //  *** other cases omitted ***
    }

    if (reply) {
          //  *** some code omitted ***
    } else if (!(t->flags & TF_ONE_WAY)) {
        //BC_TRANSACTION and non-oneway: set up the transaction stack
        t->need_reply = 1;
        t->from_parent = thread->transaction_stack;
        thread->transaction_stack = t;
    } else {
          //  *** some code omitted ***
    }
    //Add BINDER_WORK_TRANSACTION to the target queue; for this call that is target_proc->todo
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);
    //Add BINDER_WORK_TRANSACTION_COMPLETE to the current thread's todo queue
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);
    //Wake up the target wait queue; for this call that is target_proc->wait
    if (target_wait)
        wake_up_interruptible(target_wait);
    return;
}
  • During service registration a BBinder object is passed, so in the writeStrongBinder() step above localBinder is non-null and flat_binder_object.type is BINDER_TYPE_BINDER.
  • Service registration creates a binder_node in the process that owns the service and a binder_ref in the servicemanager process. For a given binder_node, each process creates at most one binder_ref.
  • A BINDER_WORK_TRANSACTION work item is added to servicemanager's binder_proc->todo, and execution then moves into the ServiceManager process.

This function involves three important helpers:

  • binder_get_node()
  • binder_new_node()
  • binder_get_ref_for_node()
3.1. binder_get_node()
//     /kernel/drivers/android/binder.c     line 904
static struct binder_node *binder_get_node(struct binder_proc *proc,
             binder_uintptr_t ptr)
{
  struct rb_node *n = proc->nodes.rb_node;
  struct binder_node *node;

  while (n) {
    node = rb_entry(n, struct binder_node, rb_node);

    if (ptr < node->ptr)
      n = n->rb_left;
    else if (ptr > node->ptr)
      n = n->rb_right;
    else
      return node;
  }
  return NULL;
}

This looks up the corresponding binder_node in binder_proc's nodes red-black tree, keyed by the binder pointer ptr. A small standalone illustration of this bookkeeping follows.
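
Illustration only (not kernel code): the driver keeps one binder_node per BBinder address in a per-process red-black tree keyed by that address. std::map gives the same ordered-lookup behaviour that binder_get_node()/binder_new_node() implement by hand with rb_node.

// node_lookup_sketch.cpp
#include <map>
#include <cstdint>

struct NodeSketch { uintptr_t ptr; uintptr_t cookie; };

static std::map<uintptr_t, NodeSketch> gNodes;   // stands in for proc->nodes

NodeSketch* get_or_create_node(uintptr_t ptr, uintptr_t cookie) {
    auto it = gNodes.find(ptr);                  // binder_get_node(): found -> return it
    if (it != gNodes.end()) return &it->second;
    auto res = gNodes.emplace(ptr, NodeSketch{ptr, cookie});  // binder_new_node()
    return &res.first->second;
}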

3.2. binder_new_node()
//kernel/drivers/android/binder.c      line 923
static struct binder_node *binder_new_node(struct binder_proc *proc,
                       binder_uintptr_t ptr,
                       binder_uintptr_t cookie)
{
    struct rb_node **p = &proc->nodes.rb_node;
    struct rb_node *parent = NULL;
    struct binder_node *node;
        //Empty on the first call
    while (*p) {
        parent = *p;
        node = rb_entry(parent, struct binder_node, rb_node);

        if (ptr < node->ptr)
            p = &(*p)->rb_left;
        else if (ptr > node->ptr)
            p = &(*p)->rb_right;
        else
            return NULL;
    }
        //Allocate memory for the new binder_node
    node = kzalloc(sizeof(*node), GFP_KERNEL);
    if (node == NULL)
        return NULL;
    binder_stats_created(BINDER_STAT_NODE);
        //Insert the newly created node into proc's red-black tree
    rb_link_node(&node->rb_node, parent, p);
    rb_insert_color(&node->rb_node, &proc->nodes);
    node->debug_id = ++binder_last_id;
    node->proc = proc;
    node->ptr = ptr;
    node->cookie = cookie;
        //Set the binder_work type
    node->work.type = BINDER_WORK_NODE;
    INIT_LIST_HEAD(&node->work.entry);
    INIT_LIST_HEAD(&node->async_todo);
    binder_debug(BINDER_DEBUG_INTERNAL_REFS,
             "%d:%d node %d u%016llx c%016llx created\n",
             proc->pid, current->pid, node->debug_id,
             (u64)node->ptr, (u64)node->cookie);
    return node;
}
3.3. binder_get_ref_for_node()
//    kernel/drivers/android/binder.c      line 1066
static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
              struct binder_node *node)
{
  struct rb_node *n;
  struct rb_node **p = &proc->refs_by_node.rb_node;
  struct rb_node *parent = NULL;
  struct binder_ref *ref, *new_ref;
  //Search the refs_by_node red-black tree; if a binder_ref already exists, return it.
  while (*p) {
    parent = *p;
    ref = rb_entry(parent, struct binder_ref, rb_node_node);

    if (node < ref->node)
      p = &(*p)->rb_left;
    else if (node > ref->node)
      p = &(*p)->rb_right;
    else
      return ref;
  }
  
  //Create a new binder_ref
  new_ref = kzalloc_preempt_disabled(sizeof(*ref));
  
  new_ref->debug_id = ++binder_last_id;
  //Record the owning process
  new_ref->proc = proc; 
  // Record the binder node
  new_ref->node = node; 
  rb_link_node(&new_ref->rb_node_node, parent, p);
  rb_insert_color(&new_ref->rb_node_node, &proc->refs_by_node);

  //Compute the handle (desc) of this binder reference; it is returned to target_proc
  new_ref->desc = (node == binder_context_mgr_node) ? 0 : 1;
  //Starting from the leftmost handle in the red-black tree, keep incrementing desc until the tree is exhausted or a larger handle is found
  for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
    //Recover the binder_ref from the address n of its rb_node_desc member
    ref = rb_entry(n, struct binder_ref, rb_node_desc);
    if (ref->desc > new_ref->desc)
      break;
    new_ref->desc = ref->desc + 1;
  }

  // Insert the new_ref into the proc->refs_by_desc red-black tree
  p = &proc->refs_by_desc.rb_node;
  while (*p) {
    parent = *p;
    ref = rb_entry(parent, struct binder_ref, rb_node_desc);

    if (new_ref->desc < ref->desc)
      p = &(*p)->rb_left;
    else if (new_ref->desc > ref->desc)
      p = &(*p)->rb_right;
    else
      BUG();
  }
  rb_link_node(&new_ref->rb_node_desc, parent, p);
  rb_insert_color(&new_ref->rb_node_desc, &proc->refs_by_desc);
  if (node) {
    hlist_add_head(&new_ref->node_entry, &node->refs);
  } 
  return new_ref;
}

Rules for how handle values are assigned:

  • Within each process's binder_proc, the handle values of its binder_refs start from 1 and increase.
  • In every process's binder_proc, the binder_ref with handle = 0 points to service manager.
  • The same service's binder_node may have different handle values in different processes' binder_refs. A small standalone sketch of the assignment rule follows.
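
A standalone sketch (not kernel code) of the desc assignment loop in binder_get_ref_for_node(): start from 1 (0 is reserved for service manager) and walk the existing handles in ascending order, taking the first unused value.

// desc_assignment_sketch.cpp
#include <set>
#include <cstdint>
#include <cstdio>

static uint32_t assign_desc(const std::set<uint32_t>& existing, bool is_context_mgr) {
    uint32_t desc = is_context_mgr ? 0 : 1;
    for (uint32_t used : existing) {    // std::set iterates in ascending order,
        if (used > desc) break;         // like walking refs_by_desc left to right
        desc = used + 1;
    }
    return desc;
}

int main() {
    std::set<uint32_t> refs = {0, 1, 2, 4};                   // handle 3 was released earlier
    printf("next handle = %u\n", assign_desc(refs, false));   // prints 3
    return 0;
}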

(VII) The ServiceManager side

ServiceManager's startup flow is not covered in detail here. Once started, it loops in binder_loop(); when a message arrives it calls binder_parse().

1. binder_parse()
// framework/native/cmds/servicemanager/binder.c      line 204
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
        switch(cmd) {
        case BR_TRANSACTION: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            // *** some code omitted ***
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg; 
                struct binder_io reply;
                int res;

                bio_init(&reply, rdata, sizeof(rdata), 4);
                //Parse the binder_io info out of txn
                bio_init_from_txn(&msg, txn); 
                 // Handle the received Binder transaction
                res = func(bs, txn, &msg, &reply);
                // Send the reply
                binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
            }
            ptr += sizeof(*txn);
            break;
        }
        // *** other cases omitted ***
    }
    return r;
}
2. svcmgr_handler()
//frameworks/native/cmds/servicemanager/service_manager.c    line 244
int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;
    // *** some code omitted ***
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
     // *** some code omitted ***
    switch(txn->code) {
      case SVC_MGR_ADD_SERVICE: 
          s = bio_get_string16(msg, &len);
          ...
          handle = bio_get_ref(msg); //get the handle
          allow_isolated = bio_get_uint32(msg) ? 1 : 0;
           //register the service
          if (do_add_service(bs, s, len, handle, txn->sender_euid,
              allow_isolated, txn->sender_pid))
              return -1;
          break;
       // *** other cases omitted ***
    }

    bio_put_uint32(reply, 0);
    return 0;
}
3. do_add_service()
// frameworks/native/cmds/servicemanager/service_manager.c     line 194
int do_add_service(struct binder_state *bs,
                   const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated,
                   pid_t spid)
{
    struct svcinfo *si;

    if (!handle || (len == 0) || (len > 127))
        return -1;

    //permission check
    if (!svc_can_register(s, len, spid)) {
        return -1;
    }

    //look up the service
    si = find_svc(s, len);
    if (si) {
        if (si->handle) {
            //the service was already registered; release the old registration
            svcinfo_death(bs, si); 
        }
        si->handle = handle;
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        //allocation failed: not enough memory
        if (!si) { 
            return -1;
        }
        si->handle = handle;
        si->len = len;
         //copy the service name
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t)); 
        si->name[len] = '\0';
        si->death.func = (void*) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        //svclist holds all registered services
        si->next = svclist; 
        svclist = si;
    }

    //Send a BC_ACQUIRE command, with handle as the target, to the binder driver via ioctl
    binder_acquire(bs, handle);
    //Send BC_REQUEST_DEATH_NOTIFICATION for this handle via ioctl, mainly for cleanup when the service dies
    binder_link_to_death(bs, handle, &si->death);
    return 0;
}

svcinfo records the service name together with its handle.
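
Not the servicemanager source, but a small sketch of the bookkeeping do_add_service() performs: svclist is a singly linked list of (UTF-16 name, handle) records, find_svc() walks it comparing lengths and names, and re-adding an existing name simply overwrites the handle.

// svclist_sketch.cpp
#include <cstdint>
#include <cstring>

struct SvcInfoSketch {
    SvcInfoSketch* next;
    uint32_t handle;        // the binder_ref desc inside servicemanager's binder_proc
    size_t len;
    uint16_t name[128];     // fixed size here; the real struct allocates exactly len + 1
};

static SvcInfoSketch* gSvclist = nullptr;    // stands in for svclist

SvcInfoSketch* find_svc(const uint16_t* s, size_t len) {
    for (SvcInfoSketch* si = gSvclist; si; si = si->next)
        if (si->len == len && memcmp(si->name, s, len * sizeof(uint16_t)) == 0)
            return si;
    return nullptr;
}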

4. binder_send_reply()
// frameworks/native/cmds/servicemanager/binder.c   line 170
void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       binder_uintptr_t buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;
        binder_uintptr_t buffer;
        uint32_t cmd_reply;
        struct binder_transaction_data txn;
    } __attribute__((packed)) data;
     //the free-buffer command
    data.cmd_free = BC_FREE_BUFFER; 
    data.buffer = buffer_to_free;
    // the reply command
    data.cmd_reply = BC_REPLY; 
    data.txn.target.ptr = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
     // *** some code omitted ***
    } else {
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
        data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
    }
    //Talk to the Binder driver
    binder_write(bs, &data, sizeof(data));
}

binder_write() then enters the binder driver and sends the BC_FREE_BUFFER and BC_REPLY command protocol to it, which delivers the reply to the client side.

(VIII) Summary

The core of the service registration (addService) process is: a binder_node is created in the process that owns the service, and a binder_ref is created in the servicemanager process. The binder_ref's desc is unique within a given process:

  • Within each process's binder_proc, the handle values of its binder_refs start from 1 and increase.
  • In every process's binder_proc, the binder_ref with handle = 0 points to service manager.
  • The same service's binder_node may have different handle values in different processes' binder_refs.

Registering the Media service involves MediaPlayerService (as the client process) and Service Manager (as the server process). The communication flow is shown below:

(Figure: Media service registration flow)

Step-by-step analysis:

  • 1. The MediaPlayerService process calls ioctl() to send the IPC data to the Binder driver. This can be viewed as one binder_transaction (call it BT1), executed by the current thread's binder_thread (call it thread1), so BT1->from_parent = NULL, BT1->from = thread1, thread1->transaction_stack = BT1. The IPC data contains:
    • Binder protocol: BC_TRANSACTION
    • handle equal to 0
    • RPC code: ADD_SERVICE
    • RPC data: "media.player"
  • 2. The Binder driver receives the request, generates a BR_TRANSACTION command and picks the target thread to handle it, i.e. a ServiceManager binder thread (call it thread2), so BT1->to_parent = NULL, BT1->to_thread = thread2, and the whole binder_transaction data (call it BT2) is inserted into the target thread's todo queue.
  • 3. Service Manager's thread2 receives BT2 and calls the registration function to add the service "media.player" to the service list. When registration completes, it generates the IPC reply data (BC_REPLY), with BT2->from_parent = BT1, BT2->from = thread2, thread2->transaction_stack = BT2.
  • 4. The Binder driver receives the reply request and generates a BR_REPLY command, with BT2->to_parent = BT1, BT2->to_thread = thread1, thread1->transaction_stack = BT2. When MediaPlayerService receives this command it knows registration has completed and can proceed normally.

V. Obtaining a Service

(I) Source code locations

/frameworks/av/media/libmedia/
  - IMediaDeathNotifier.cpp

framework/native/libs/binder/
  - Binder.cpp
  - BpBinder.cpp
  - IPCThreadState.cpp
  - ProcessState.cpp
  - IServiceManager.cpp


For obtaining a service in the native layer we again use media as the example. Let's start from the media class diagram.

(II) Class diagram

(Figure: class diagram)

Legend:

  • Blue: the classes involved in obtaining the MediaPlayerService service
  • Green: the two core classes of the Binder architecture that talk to the Binder driver
  • Purple: the common interfaces/base classes shared by registering and obtaining services

(III) The flow of obtaining a service

1. getMediaPlayerService()
//frameworks/av/media/libmedia/IMediaDeathNotifier.cpp   line 35
sp<IMediaPlayerService>&
IMediaDeathNotifier::getMediaPlayerService()
{
    Mutex::Autolock _l(sServiceLock);
    if (sMediaPlayerService == 0) {
         // Obtain the ServiceManager
        sp<IServiceManager> sm = defaultServiceManager(); 
        sp<IBinder> binder;
        do {
            //Get the service named "media.player"
            binder = sm->getService(String16("media.player"));
            if (binder != 0) {
                break;
            }
            usleep(500000); // 0.5s
        } while (true);

        if (sDeathNotifier == NULL) {
            // Create the death-notification object
            sDeathNotifier = new DeathNotifier(); 
        }

        //Link the death notification to the binder
        binder->linkToDeath(sDeathNotifier);
        sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
    }
    return sMediaPlayerService;
}

The defaultServiceManager() step was covered above; it returns a BpServiceManager.

The service named "media.player" is requested in a loop: MediaPlayerService may not have finished registering with ServiceManager, or may not have started yet, in which case getService() returns NULL; the caller then sleeps 0.5 s and tries again until the service is obtained. Once the IBinder is obtained, interface_cast<IMediaPlayerService>() turns it into a usable interface, as sketched below.
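
For reference, interface_cast is just a thin wrapper (roughly as defined in IInterface.h); INTERFACE::asInterface() is generated by the IMPLEMENT_META_INTERFACE macro and, for a remote binder, returns the Bp-side proxy — here a BpMediaPlayerService wrapping the BpBinder.

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}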

2. BpServiceManager::getService()
//frameworks/native/libs/binder/IServiceManager.cpp       line 134
virtual sp<IBinder> getService(const String16& name) const
    {
        unsigned n;
        for (n = 0; n < 5; n++){
            sp<IBinder> svc = checkService(name); 
            if (svc != NULL) return svc;
            sleep(1);
        }
        return NULL;
    }

The MediaPlayer service is obtained through BpServiceManager: check whether the service exists; if it does, return it; if not, sleep 1 s and check again, for at most 5 iterations. Why 5 times? It is probably related to Android's 5-second ANR timeout: five loops with a 1 s sleep each (ignoring the time spent in checkService()) is roughly 5 s.

3. BpServiceManager::checkService()
//frameworks/native/libs/binder/IServiceManager.cpp    line 146
virtual sp<IBinder> checkService( const String16& name) const
{
    Parcel data, reply;
    //Write the RPC header
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    //Write the service name
    data.writeString16(name);
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply); 
    return reply.readStrongBinder(); 
}

This checks whether the specified service exists; remote() here is the BpBinder.

4. BpBinder::transact()
// /frameworks/native/libs/binder/BpBinder.cpp    line 159
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}

Again, the Binder proxy's transact() hands the real work over to IPCThreadState.

4.1. IPCThreadState::self()
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        //Create the IPCThreadState for this thread
        return new IPCThreadState;  
    }

    if (gShutdown) return NULL;
    pthread_mutex_lock(&gTLSMutex);
     //On first entry gHaveTLS is false
    if (!gHaveTLS) {
         //Create the per-thread TLS key
        if (pthread_key_create(&gTLS, threadDestructor) != 0) { 
            pthread_mutex_unlock(&gTLSMutex);
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}

TLS is Thread Local Storage: each thread owns its own private TLS area, which is not shared between threads; pthread_getspecific()/pthread_setspecific() get and set its contents. The IPCThreadState object for the current thread is fetched from that thread-local storage.

The rest of the flow is largely the same as the registration flow above; it again goes through IPCThreadState::transact(), IPCThreadState::writeTransactionData(), IPCThreadState::waitForResponse() and IPCThreadState::talkWithDriver(). Since those were covered above, we pick up the walkthrough at IPCThreadState::talkWithDriver().
4.2. IPCThreadState::talkWithDriver()
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ...
    binder_write_read bwr;
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();
     //Fill in the receive-buffer info; data received later goes straight into mIn.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    //If both the read buffer and the write buffer are empty, return immediately
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        //Keep talking to the Binder driver through repeated ioctl read/write calls
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        ...
       //Retry if interrupted
    } while (err == -EINTR); 
    ...
    return err;
}

binder_write_read is the structure used to exchange data with the Binder device; the ioctl on mDriverFD is the real data read/write interaction with the driver. The query request is first sent to the service manager process (delivered to it as BR_TRANSACTION). When service manager receives the command it runs do_find_service() to look up the handle of the requested service, then replies to the sender via binder_send_reply(), sending the BC_REPLY protocol; the driver's binder_transaction() then inserts the reply work into the requester's todo queue.

Let's now look at that binder_transaction() step.

4.2.1. binder_transaction()
//kernel/drivers/android/binder.c     line 1827
static void binder_transaction(struct binder_proc *proc,
               struct binder_thread *thread,
               struct binder_transaction_data *tr, int reply){
    //Based on various checks, obtain the following:
    // target thread
    struct binder_thread *target_thread; 
    // target process
    struct binder_proc *target_proc; 
    // target binder node
    struct binder_node *target_node;    
    // target todo queue
    struct list_head *target_list;    
    // target wait queue
    wait_queue_head_t *target_wait;    
    ...
    
    //Allocate the two structs
    struct binder_transaction *t = kzalloc(sizeof(*t), GFP_KERNEL);
    struct binder_work *tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
    //Allocate a buffer from target_proc
    t->buffer = binder_alloc_buf(target_proc, tr->data_size,

    for (; offp < off_end; offp++) {
        switch (fp->type) {
        case BINDER_TYPE_BINDER: ...
        case BINDER_TYPE_WEAK_BINDER: ...
        
        case BINDER_TYPE_HANDLE: 
        case BINDER_TYPE_WEAK_HANDLE: {
          struct binder_ref *ref = binder_get_ref(proc, fp->handle,
                fp->type == BINDER_TYPE_HANDLE);
          ...
          //We are running in the servicemanager process, so ref->node points to the binder entity
          //in the process that owns the service, while target_proc is the requesting process; here they differ.
          if (ref->node->proc == target_proc) {
            if (fp->type == BINDER_TYPE_HANDLE)
              fp->type = BINDER_TYPE_BINDER;
            else
              fp->type = BINDER_TYPE_WEAK_BINDER;
            fp->binder = ref->node->ptr;
             // the address of the BBinder service
            fp->cookie = ref->node->cookie; 
            binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL);
            
          } else {
            struct binder_ref *new_ref;
            //The requesting process is not the process that owns the service, so create a binder_ref for the requesting process
            new_ref = binder_get_ref_for_node(target_proc, ref->node);
            fp->binder = 0;
             //assign a new handle value
            fp->handle = new_ref->desc; 
            fp->cookie = 0;
            binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
          }
        } break;
        
        case BINDER_TYPE_FD: ...
        }
    }
    //Insert work into target_list and into the current thread's todo queue
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);
    if (target_wait)
        wake_up_interruptible(target_wait);
    return;
}

This step is very important; there are two cases:

  • Case 1: the requesting process and the service belong to different processes. A binder_ref object is created for the requesting process, pointing to the binder_node in the service's process.
  • Case 2: the requesting process and the service belong to the same process. No new object is created; the reference count is simply incremented, and the type is changed to BINDER_TYPE_BINDER or BINDER_TYPE_WEAK_BINDER.
4.2.2. binder_thread_read()
//kernel/drivers/android/binder.c    line 2650
binder_thread_read(struct binder_proc *proc,struct binder_thread *thread,binder_uintptr_t binder_buffer, size_t size,binder_size_t *consumed, int non_block){
    ...
    //If the thread's todo queue has work, continue; otherwise sleep and wait for work
    ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
    ...
    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;
        //First try to take work from the thread's todo queue
        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work, entry);
        // The thread's todo queue is empty, so take work from the process's todo queue
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            ...
        }
        switch (w->type) {
            case BINDER_WORK_TRANSACTION:
                //Get the transaction
                t = container_of(w, struct binder_transaction, work);
                break;
                
            // *** other cases omitted ***
        }

        //Only BINDER_WORK_TRANSACTION continues past this point
        if (!t) continue;

        if (t->buffer->target_node) {
            ...
        } else {
            tr.target.ptr = NULL;
            tr.cookie = NULL;
            //Set the command to BR_REPLY
            cmd = BR_REPLY; 
        }
        tr.code = t->code;
        tr.flags = t->flags;
        tr.sender_euid = t->sender_euid;

        if (t->from) {
            struct task_struct *sender = t->from->proc->tsk;
            //For non-oneway calls, save the caller's pid into sender_pid
            tr.sender_pid = task_tgid_nr_ns(sender, current->nsproxy->pid_ns);
        } else {
            ...
        }

        tr.data_size = t->buffer->data_size;
        tr.offsets_size = t->buffer->offsets_size;
        tr.data.ptr.buffer = (void *)t->buffer->data +
                    proc->user_buffer_offset;
        tr.data.ptr.offsets = tr.data.ptr.buffer +
                    ALIGN(t->buffer->data_size,
                        sizeof(void *));

        //Write the cmd and the data back to user space
        put_user(cmd, (uint32_t __user *)ptr);
        ptr += sizeof(uint32_t);
        copy_to_user(ptr, &tr, sizeof(tr));
        ptr += sizeof(tr);

        list_del(&t->work.entry);
        t->buffer->allow_user_free = 1;
        if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
            ...
        } else {
            t->buffer->transaction = NULL;
            //The call is complete, so free the transaction
            kfree(t); 
        }
        break;
    }
done:
    *consumed = ptr - buffer;
    if (proc->requested_threads + proc->ready_threads == 0 &&
        proc->requested_threads_started < proc->max_threads &&
        (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
         BINDER_LOOPER_STATE_ENTERED))) {
        proc->requested_threads++;
        // Emit a BR_SPAWN_LOOPER command, asking user space to spawn a new binder thread
        put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer);
    }
    return 0;
}

4.3. readStrongBinder()
//frameworks/native/libs/binder/Parcel.cpp   line 1334
sp<IBinder> Parcel::readStrongBinder() const
{
    sp<IBinder> val;
    unflatten_binder(ProcessState::self(), *this, &val);
    return val;
}

It mainly calls unflatten_binder(), so let's look at that in detail.

4.3.1. unflatten_binder()
status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);
    if (flat) {
        switch (flat->type) {
            case BINDER_TYPE_BINDER:
                // The requesting process and the service are the same process
                *out = reinterpret_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(NULL, *flat, in);
            case BINDER_TYPE_HANDLE:
                //The requesting process and the service are different processes
                *out = proc->getStrongProxyForHandle(flat->handle);
                //Create a BpBinder object
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}

When the requesting process and the service live in different processes, getStrongProxyForHandle() is called, so let's dig into it.

4.3.2. getStrongProxyForHandle()
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);
    //Look up the handle_entry for this handle
    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            ...
            //If no IBinder exists yet for this handle, or its weak reference is no longer valid, create a BpBinder
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}

So readStrongBinder() parses the flat_binder_object and, for a remote service, creates the BpBinder object.

4.3.3. lookupHandleLocked()
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
    const size_t N=mHandleToObject.size();
    //If handle is beyond the current length of mHandleToObject, take this branch
    if (N <= (size_t)handle) {
        handle_entry e;
        e.binder = NULL;
        e.refs = NULL;
        //Starting at index N of mHandleToObject, insert (handle+1-N) copies of e
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return NULL;
    }
    return &mHandleToObject.editItemAt(handle);
}

Looks up the handle_entry that corresponds to the given handle.

(IV) Death notification

A death notification lets the Bp side know whether the Bn side is still alive.

  • DeathNotifier inherits from IBinder::DeathRecipient and mainly needs to implement binderDied() to handle the death notice.
  • Registration: binder->linkToDeath(sDeathNotifier) registers the sDeathNotifier death notification on the binder.

The Bp side only needs to override binderDied() and do its cleanup there; when the Bn side dies, binderDied() is called back to handle it. A minimal sketch of this pattern follows.
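
A minimal sketch (hypothetical class name) of that client-side pattern: subclass IBinder::DeathRecipient, override binderDied(), and register the recipient on the server's proxy with linkToDeath().

// death_notifier_sketch.cpp - illustrative only; assumes the Android tree/libbinder
#define LOG_TAG "SampleDeathNotifier"
#include <binder/IBinder.h>
#include <utils/Log.h>
#include <utils/StrongPointer.h>

using namespace android;

class SampleDeathNotifier : public IBinder::DeathRecipient {
public:
    void binderDied(const wp<IBinder>& /*who*/) override {
        // The Bn side has died: drop cached proxies, reconnect, notify listeners, etc.
        ALOGW("remote service died");
    }
};

void watchService(const sp<IBinder>& binder, const sp<SampleDeathNotifier>& notifier) {
    // Registers the death notification; binderDied() is called back if the server dies.
    binder->linkToDeath(notifier);
}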

1. linkToDeath()
// frameworks/native/libs/binder/BpBinder.cpp   line 173
status_t BpBinder::linkToDeath(
    const sp<DeathRecipient>& recipient, void* cookie, uint32_t flags)
{
    Obituary ob;
    ob.recipient = recipient;
    ob.cookie = cookie;
    ob.flags = flags;

    {
        AutoMutex _l(mLock);
        if (!mObitsSent) {
            if (!mObituaries) {
                mObituaries = new Vector<Obituary>;
                if (!mObituaries) {
                    return NO_MEMORY;
                }
                getWeakRefs()->incWeak(this);
                IPCThreadState* self = IPCThreadState::self();
                self->requestDeathNotification(mHandle, this);
                self->flushCommands();
            }
            ssize_t res = mObituaries->add(ob);
            return res >= (ssize_t)NO_ERROR ? (status_t)NO_ERROR : res;
        }
    }
    return DEAD_OBJECT;
}

It calls requestDeathNotification().

2. requestDeathNotification()
//frameworks/native/libs/binder/IPCThreadState.cpp    line 670
status_t IPCThreadState::requestDeathNotification(int32_t handle, BpBinder* proxy)
{
    mOut.writeInt32(BC_REQUEST_DEATH_NOTIFICATION);
    mOut.writeInt32((int32_t)handle);
    mOut.writePointer((uintptr_t)proxy);
    return NO_ERROR;
}

This writes the BC_REQUEST_DEATH_NOTIFICATION command for the binder driver. The rest of the flow is the same as the binder_link_to_death() path inside Service Manager described earlier.

3. binderDied()
//frameworks/av/media/libmedia/IMediaDeathNotifier.cpp    line 78
void IMediaDeathNotifier::DeathNotifier::binderDied(const wp<IBinder>& who __unused) {
    SortedVector< wp<IMediaDeathNotifier> > list;
    {
        Mutex::Autolock _l(sServiceLock);
        // Clear the Bp-side MediaPlayerService
        sMediaPlayerService.clear();   
        list = sObitRecipients;
    }

    size_t count = list.size();
    for (size_t iter = 0; iter < count; ++iter) {
        sp<IMediaDeathNotifier> notifier = list[iter].promote();
        if (notifier != 0) {
            //When MediaServer has died, notify the application; the application's callback is invoked here
            notifier->died(); 
        }
    }
}

The client process obtains the Binder proxy (BpBinder) through the Binder driver. Registering a death notification means the client process registers, with the Binder driver, a notification associated with the BBinder — that is, with the server side that this BpBinder corresponds to.

4. unlinkToDeath()

If the Bp side itself is destroyed before it receives the server's death notification, it should call unlinkToDeath() in its destructor to cancel the notification:

//frameworks/av/media/libmedia/IMediaDeathNotifier.cpp    line 101
IMediaDeathNotifier::DeathNotifier::~DeathNotifier()
{
    Mutex::Autolock _l(sServiceLock);
    sObitRecipients.clear();
    if (sMediaPlayerService != 0) {
        IInterface::asBinder(sMediaPlayerService)->unlinkToDeath(this);
    }
}
5. When the notification fires

Whenever a service process exits, service manager receives a death notification from the Binder driver; this was set up when Service Manager called binder_link_to_death(bs, ptr, &si->death) during registration. In addition, every Bp side can register its own death notification to learn of the Binder's death, as IMediaDeathNotifier does above.

So how is the death notification triggered? Every process doing Binder IPC opens /dev/binder. When a process exits abnormally, the Binder driver makes sure the /dev/binder file the exiting process failed to close properly is released: the driver invokes the release callback registered for /dev/binder to do the cleanup, and checks whether any BBinder has death notifications registered; if so, it sends a death-notification message to the corresponding BpBinder side.

(V) Summary

During getService, when execution reaches binder_transaction() the driver distinguishes which process the requested service belongs to:

  • When the requesting process and the service belong to different processes, a binder_ref object is created for the requesting process, pointing to the service process's binder_node.
  • When the requesting process and the service belong to the same process, no new object is created; the reference count is simply incremented and the type is changed to BINDER_TYPE_BINDER or BINDER_TYPE_WEAK_BINDER.
  • Finally, readStrongBinder() returns the appropriate object: the local BBinder itself in the same-process case, or a BpBinder proxy otherwise.