Binder Framework -- Interaction Between User Space and the Driver

During the startup of MediaPlayerService, the code can be simplified to:

  1. sp<IServiceManager> sm = new BpServiceManager(BpBinder(0));
  2. sm->addService(String16("media.player"), new MediaPlayerService());
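
For reference, these two simplified lines correspond roughly to MediaPlayerService::instantiate() and defaultServiceManager() in the AOSP sources of that era; the sketch below is abridged and the surrounding details are omitted:

void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            // getContextObject(NULL) hands back a BpBinder with handle 0,
            // which interface_cast wraps in a BpServiceManager.
            gDefaultServiceManager = interface_cast<IServiceManager>(
                    ProcessState::self()->getContextObject(NULL));
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }
    return gDefaultServiceManager;
}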

With these two steps in mind, let's look at how the user-space program and the kernel driver interact.

Parcel

For the interaction between user programs and the kernel, Android provides the Parcel class to package the data being transferred. This is essentially serialization: values are written into memory by type, in write order. Parcel has two main storage areas, mData and mObjects: mData holds the serialized payload (basic types as well as flattened Binder objects), while mObjects records the offsets of the Binder objects inside mData. Parcel provides write and read functions for each data type, and both areas are allocated with malloc. A toy sketch of this layout follows the member list below.

uint8_t*            mData;
size_t              mDataSize;
size_t              mDataCapacity;
mutable size_t      mDataPos;

binder_size_t*      mObjects;
size_t              mObjectsSize;
size_t              mObjectsCapacity;
mutable size_t      mNextObjectHint;
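
A minimal sketch of the idea (this is not the real Parcel, just an illustration of sequential typed writes into a data area plus an offset table for objects; ToyParcel and its members are made-up names):

#include <cstddef>
#include <cstdint>
#include <vector>

// Toy parcel: values are appended to a flat byte buffer in write order; for
// each "object" we additionally remember its starting offset, the way Parcel
// records the positions of flat_binder_object entries in mObjects.
struct ToyParcel {
    std::vector<uint8_t> data;      // plays the role of mData
    std::vector<size_t>  objects;   // plays the role of mObjects (offsets into data)

    void writeInt32(int32_t v) {
        const uint8_t* p = reinterpret_cast<const uint8_t*>(&v);
        data.insert(data.end(), p, p + sizeof(v));
    }
    void writeObject(const void* obj, size_t len) {
        objects.push_back(data.size());                  // remember where the object starts
        const uint8_t* p = static_cast<const uint8_t*>(obj);
        data.insert(data.end(), p, p + len);
    }
};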

flat_binder_object

During Parcel serialization, a Binder object is stored as a flat_binder_object struct. The functions flatten_binder and unflatten_binder are provided for serialization and deserialization.

struct flat_binder_object {
  __u32 type;                 // BINDER_TYPE_BINDER / HANDLE / WEAK_* / FD
  __u32 flags;
  union {
    binder_uintptr_t binder;  // local object (weakref pointer) when type is *_BINDER
    __u32 handle;             // remote reference when type is *_HANDLE
  };
  binder_uintptr_t cookie;    // local BBinder pointer, only meaningful for *_BINDER
};
status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const wp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;

    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        sp<IBinder> real = binder.promote();
        if (real != NULL) {
            IBinder *local = real->localBinder();
            if (!local) {
                BpBinder *proxy = real->remoteBinder();
                if (proxy == NULL) {
                    ALOGE("null proxy");
                }
                const int32_t handle = proxy ? proxy->handle() : 0;
                obj.type = BINDER_TYPE_WEAK_HANDLE;
                obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
                obj.handle = handle;
                obj.cookie = 0;
            } else {
                obj.type = BINDER_TYPE_WEAK_BINDER;
                obj.binder = reinterpret_cast<uintptr_t>(binder.get_refs());
                obj.cookie = reinterpret_cast<uintptr_t>(binder.unsafe_get());
            }
            return finish_flatten_binder(real, obj, out);
        }

        // XXX How to deal?  In order to flatten the given binder,
        // we need to probe it for information, which requires a primary
        // reference...  but we don't have one.
        //
        // The OpenBinder implementation uses a dynamic_cast<> here,
        // but we can't do that with the different reference counting
        // implementation we are using.
        ALOGE("Unable to unflatten Binder weak reference!");
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = 0;
        obj.cookie = 0;
        return finish_flatten_binder(NULL, obj, out);

    } else {
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = 0;
        obj.cookie = 0;
        return finish_flatten_binder(NULL, obj, out);
    }
}

status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);

    if (flat) {
        switch (flat->type) {
            case BINDER_TYPE_BINDER:
                *out = reinterpret_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(NULL, *flat, in);
            case BINDER_TYPE_HANDLE:
                *out = proc->getStrongProxyForHandle(flat->handle);
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}

status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, wp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);

    if (flat) {
        switch (flat->type) {
            case BINDER_TYPE_BINDER:
                *out = reinterpret_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(NULL, *flat, in);
            case BINDER_TYPE_WEAK_BINDER:
                if (flat->binder != 0) {
                    out->set_object_and_refs(
                        reinterpret_cast<IBinder*>(flat->cookie),
                        reinterpret_cast<RefBase::weakref_type*>(flat->binder));
                } else {
                    *out = NULL;
                }
                return finish_unflatten_binder(NULL, *flat, in);
            case BINDER_TYPE_HANDLE:
            case BINDER_TYPE_WEAK_HANDLE:
                *out = proc->getWeakProxyForHandle(flat->handle);
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->unsafe_get()), *flat, in);
        }
    }
    return BAD_TYPE;
}
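
The read side is symmetric; in older versions of Parcel.cpp, readStrongBinder is essentially just a call into unflatten_binder at the current read position:

sp<IBinder> Parcel::readStrongBinder() const
{
    sp<IBinder> val;
    unflatten_binder(ProcessState::self(), *this, &val);
    return val;
}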

new BpServiceManager(BpBinder(0))

The BpServiceManager constructor takes a BpBinder as its argument. BpBinder's onFirstRef increments the strong reference count on the driver side, and its constructor increments the weak reference count; both are fairly simple.

BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    ALOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle);
}

void BpBinder::onFirstRef()
{
    ALOGV("onFirstRef BpBinder %p handle %d\n", this, mHandle);
    IPCThreadState* ipc = IPCThreadState::self();
    if (ipc) ipc->incStrongHandle(mHandle);
}
void IPCThreadState::incStrongHandle(int32_t handle)
{
    LOG_REMOTEREFS("IPCThreadState::incStrongHandle(%d)\n", handle);
    mOut.writeInt32(BC_ACQUIRE);
    mOut.writeInt32(handle);
}

addService

virtual status_t addService(const String16& name, const sp<IBinder>& service, bool allowIsolated)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}

localBinder and remoteBinder

Android provides three APIs for accessing the Binder object of the current process. They behave differently on the server side and the proxy side, implementing polymorphism: the three virtual functions are declared in IBinder, BBinder implements localBinder, and BpBinder implements remoteBinder. The names match their meaning nicely: BBinder lives on the server side, so it is the localBinder, while BpBinder lives on the proxy side, which is remote from the server's point of view.

function              IBinder   BBinder   BpBinder
queryLocalInterface   NULL      this      NULL
localBinder           NULL      this      NULL
remoteBinder          NULL      NULL      this
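
A heavily simplified, self-contained sketch of the override structure behind this table (the real classes live in IBinder.h, Binder.h and BpBinder.h and also involve RefBase, queryLocalInterface, reference counting, etc.):

class BBinder;
class BpBinder;

class IBinder {
public:
    virtual BBinder*  localBinder()  { return nullptr; }  // default: not a local binder
    virtual BpBinder* remoteBinder() { return nullptr; }  // default: not a proxy
    virtual ~IBinder() {}
};

class BBinder : public IBinder {          // server side
public:
    BBinder* localBinder() override { return this; }
};

class BpBinder : public IBinder {         // proxy side
public:
    BpBinder* remoteBinder() override { return this; }
};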

writeStrongBinder

Parcel's writeStrongBinder serializes Binder-typed data, as the name of the internal function it calls, flatten_binder, also suggests. During serialization, localBinder is used to decide whether we are on the proxy side or the server side: if it returns NULL we are on the proxy side and remoteBinder is used to obtain the BpBinder; if it is non-NULL we are on the server side, which is the case here. The serialized Binder is stored in a flat_binder_object struct. Note how flat_binder_object is filled in when a BBinder is serialized:

  1. type = BINDER_TYPE_BINDER
  2. binder = reinterpret_cast<uintptr_t>(local->getWeakRefs()); // the weak-reference object
  3. cookie = reinterpret_cast<uintptr_t>(local); // the this pointer, the BBinder itself
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}

status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;

    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        IBinder *local = binder->localBinder();
        if (!local) {
            ......
        } else {
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local);
        }
    } else {
        ......
    }

    return finish_flatten_binder(binder, obj, out);
}

inline static status_t finish_flatten_binder(
    const sp<IBinder>& /*binder*/, const flat_binder_object& flat, Parcel* out)
{
    return out->writeObject(flat, false);
}

status_t Parcel::writeObject(const flat_binder_object& val, bool nullMetaData)
{
    const bool enoughData = (mDataPos+sizeof(val)) <= mDataCapacity;
    const bool enoughObjects = mObjectsSize < mObjectsCapacity;
    if (enoughData && enoughObjects) {
restart_write:
        // Cast the current write position and store the object directly into the Parcel's data area -- rather obscure
        *reinterpret_cast<flat_binder_object*>(mData+mDataPos) = val;  

        // remember if it's a file descriptor
        if (val.type == BINDER_TYPE_FD) {
            if (!mAllowFds) {
                return FDS_NOT_ALLOWED;
            }
            mHasFds = mFdsKnown = true;
        }

        // Need to write meta-data?
        // nullMetaData == false here, and val.binder != 0 -- see the serialization assignments above
        if (nullMetaData || val.binder != 0) {
            mObjects[mObjectsSize] = mDataPos;
            // adjust strong/weak reference counts
            acquire_object(ProcessState::self(), val, this, &mOpenAshmemSize);
            mObjectsSize++;
        }

        return finishWrite(sizeof(flat_binder_object));
    }

    // grow the storage areas if needed
    if (!enoughData) {
        const status_t err = growData(sizeof(val));
        if (err != NO_ERROR) return err;
    }
    if (!enoughObjects) {
        size_t newSize = ((mObjectsSize+2)*3)/2;
        if (newSize < mObjectsSize) return NO_MEMORY;   // overflow
        binder_size_t* objects = (binder_size_t*)realloc(mObjects, newSize*sizeof(binder_size_t));
        if (objects == NULL) return NO_MEMORY;
        mObjects = objects;
        mObjectsCapacity = newSize;
    }

    goto restart_write;
}

The transact process

Once the Parcel data has been serialized, it must be delivered to the server side; this is where transact comes in. Based on the earlier analysis, the call chain is BpBinder::transact => IPCThreadState::transact. The BpBinder call passes three arguments: the first is the cmd, the second is the input Parcel, and the third receives the reply.

status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);

BpBinder::transact

BpBinder::transact adds one more argument, mHandle. When BpServiceManager was constructed, mHandle was set to 0; here it identifies the service being addressed. The origin and meaning of this handle are discussed later.

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

IPCThreadState::transact

  1. writeTransactionData
  2. waitForResponse
  3. talkWithDriver
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();
    flags |= TF_ACCEPT_FDS;
    
    if (err == NO_ERROR) {
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
    
    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }
    
    return err;
}

writeTransactionData

The most important call inside IPCThreadState::transact is writeTransactionData. Here the incoming Parcel, together with the handle and code, is wrapped into a data type the kernel understands, struct binder_transaction_data, tagged with the command BC_TRANSACTION, and written into the memory of IPCThreadState's Parcel mOut. At this point there are three Parcels: the two parameters data and reply, plus IPCThreadState's internal member mOut, which now holds the binder_transaction_data.

struct binder_transaction_data {
  union {
    __u32 handle;
    binder_uintptr_t ptr;
  } target;               // which Binder the transaction is addressed to
  binder_uintptr_t cookie;
  __u32 code;             // command code understood by the target service
  __u32 flags;
  pid_t sender_pid;
  uid_t sender_euid;
  binder_size_t data_size;       // size of the data area
  binder_size_t offsets_size;    // size of the object (offsets) area -- an odd name; objs_size might fit better
  union {
    struct {
      binder_uintptr_t buffer;     // address of the data area
      binder_uintptr_t offsets;    // address of the object (offsets) area -- the naming is debatable
    } ptr;
    __u8 buf[8];
  } data;                // describes the payload being transferred
};

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;   // the ptr field is not used; the driver looks up the binder by handle
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();      // size of the data area
        tr.data.ptr.buffer = data.ipcData();    // address of the data area
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t); // size of the object (offsets) area
        tr.data.ptr.offsets = data.ipcObjects(); // address of the object (offsets) area
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }
    
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));
    
    return NO_ERROR;
}

waitForResponse

waitForResponse calls talkWithDriver in a loop:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        
        ......
        
     }
}

talkWithDriver

talkWithDriver is where we finally talk to the driver. Here all the data is described by yet another struct, binder_write_read. Described rather than replaced: binder_write_read merely records the sizes of IPCThreadState's two Parcels and the addresses of their internal buffers. ioctl is then invoked with the BINDER_WRITE_READ command, passing the struct binder_write_read as the argument; mIn holds the data returned from the kernel.
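
For reference, binder_write_read is defined in the binder UAPI header roughly as follows (comments added to match how talkWithDriver fills it in):

struct binder_write_read {
  binder_size_t    write_size;      // number of bytes available in write_buffer
  binder_size_t    write_consumed;  // filled by the driver: bytes it actually processed
  binder_uintptr_t write_buffer;    // points at mOut.data()
  binder_size_t    read_size;       // capacity of read_buffer
  binder_size_t    read_consumed;   // filled by the driver: bytes it wrote back
  binder_uintptr_t read_buffer;     // points at mIn.data()
};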

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }
    
    binder_write_read bwr;
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    
    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
#if defined(__ANDROID__)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
    } while (err == -EINTR);

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
     
        return NO_ERROR;
    }
    
    return err;
}

Data transfer and encapsulation

During the transact process, the data to be transferred -- originally basic types plus Binder objects -- is wrapped layer by layer and finally described by struct binder_write_read, which is passed into the kernel as the ioctl argument. The encapsulation chain is: Parcel (mData / mObjects) -> binder_transaction_data (written into mOut) -> binder_write_read -> ioctl, as sketched below.
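
One way to picture the nesting for the addService call above (arrows point from a descriptor field to the memory it references):

ioctl(fd, BINDER_WRITE_READ, &bwr)
  bwr.write_buffer --> mOut:  [ BC_TRANSACTION | binder_transaction_data tr ]
                                tr.data.ptr.buffer  --> data.mData    (interface token, "media.player", flat_binder_object, ...)
                                tr.data.ptr.offsets --> data.mObjects (offsets of the flat_binder_object inside mData)
  bwr.read_buffer  --> mIn:   filled in by the driver when the call returns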

binder_ioctl

binder_ioctl_write_read

Once in the kernel, binder_ioctl selects binder_ioctl_write_read based on the cmd. binder_ioctl_write_read calls copy_from_user to copy the binder_write_read struct from user space -- the first copy. Then, in the reverse order of how the data was wrapped in user space, it performs the write and the read according to the size of the write data and the size of the read buffer.

static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }

    if (bwr.write_size > 0) {
        ret = binder_thread_write(proc, thread,
                      bwr.write_buffer,
                      bwr.write_size,
                      &bwr.write_consumed);
        trace_binder_write_done(ret);
        if (ret < 0) {
            bwr.read_consumed = 0;
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    
    if (bwr.read_size > 0) {
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                     bwr.read_size,
                     &bwr.read_consumed,
                     filp->f_flags & O_NONBLOCK);
        trace_binder_read_done(ret);
        if (!list_empty(&proc->todo))
            wake_up_interruptible(&proc->wait);
        if (ret < 0) {
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    
    ......
}

binder_thread_write

get_user performs the second copy: it reads the command word in front of the binder_transaction_data -- here BC_TRANSACTION, just one int in size. The third copy, copy_from_user, reads out the binder_transaction_data struct itself.

static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        trace_binder_command(cmd);
        if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
            binder_stats.bc[_IOC_NR(cmd)]++;
            proc->stats.bc[_IOC_NR(cmd)]++;
            thread->stats.bc[_IOC_NR(cmd)]++;
        }
        switch (cmd) {
        ......
        
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;

            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
       ......
       
        default:
            return -EINVAL;
        }
        *consumed = ptr - buffer;
    }
    return 0;
}

binder_transaction

  1. binder_transaction first looks up the target binder's target_node from the handle; here handle == 0, so target_node is the ServiceManager node.
  2. A binder_transaction struct is built. This struct is used only inside the driver, to carry data between processes. binder_alloc_buf is called to prepare space for the data copy; besides allocating the space, it also makes it possible to translate the kernel address into the corresponding user-space address of the process that target_node represents.
  3. copy_from_user copies the data area -- the fourth copy.
  4. copy_from_user copies the object (offsets) area -- the fifth copy.
  5. A loop walks the offsets area and parses out every binder object, each still represented as a flat_binder_object.
  6. When writeStrongBinder ran earlier, the binder type was set to BINDER_TYPE_BINDER.
  7. binder_get_node looks up the binder_node of this binder in the sending process; it does not exist yet, so a new one is created.
  8. binder_get_ref_for_node finds (or creates) the reference, binder_ref *ref, that the target process holds to this node.
  9. The parsed object is then rewritten. The IBinder we passed in is the MediaPlayerService, which is essentially a BnMediaPlayerService; its handle is assigned here: fp->handle = ref->desc, where ref is the reference obtained from binder_get_ref_for_node. The value and meaning of desc are analyzed later. This is where a Binder object's handle is born.
  10. The binder_transaction is added to the target_list of the process that target_node represents -- in practice, its todo queue.
  11. The target process is woken up. At this point the handoff to the other process is complete.
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply)
{
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    binder_size_t *offp, *off_end;
    binder_size_t off_min;
    struct binder_proc *target_proc;
    struct binder_thread *target_thread = NULL;
    struct binder_node *target_node = NULL;
    struct list_head *target_list;
    wait_queue_head_t *target_wait;
    struct binder_transaction *in_reply_to = NULL;
    struct binder_transaction_log_entry *e;
    uint32_t return_error;

    if (reply) {
        
    } else {
        if (tr->target.handle) {      // target.handle == 0, so the else branch is taken
        
        } else {
            target_node = binder_context_mgr_node;
            if (target_node == NULL) {
                return_error = BR_DEAD_REPLY;
                goto err_no_context_mgr_node;
            }
        }
        target_proc = target_node->proc;
    
        if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
            struct binder_transaction *tmp;

            tmp = thread->transaction_stack;
            if (tmp->to_thread != thread) {
                binder_user_error("%d:%d got new transaction with bad transaction stack, transaction %d has target %d:%d\n",
                    proc->pid, thread->pid, tmp->debug_id,
                    tmp->to_proc ? tmp->to_proc->pid : 0,
                    tmp->to_thread ?
                    tmp->to_thread->pid : 0);
                return_error = BR_FAILED_REPLY;
                goto err_bad_call_stack;
            }
            while (tmp) {
                if (tmp->from && tmp->from->proc == target_proc)
                    target_thread = tmp->from;
                tmp = tmp->from_parent;
            }
        }
    }
    
    if (target_thread) {
        e->to_thread = target_thread->pid;
        target_list = &target_thread->todo;
        target_wait = &target_thread->wait;
    } else {   // target_thread is NULL
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }

    /* TODO: reuse incoming transaction for reply */
    // t is the binder_transaction; allocate and fill in the struct
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    if (t == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_t_failed;
    }
    binder_stats_created(BINDER_STAT_TRANSACTION);

    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
    if (tcomplete == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_tcomplete_failed;
    }
    binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);

    t->debug_id = ++binder_last_id;  // global counter

    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;
    else
        t->from = NULL;
    t->sender_euid = task_euid(proc->tsk);
    t->to_proc = target_proc;
    t->to_thread = target_thread;
    t->code = tr->code;
    t->flags = tr->flags;
    t->priority = task_nice(current);

    trace_binder_transaction(reply, t, target_node);

    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
        tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
    if (t->buffer == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_binder_alloc_buf_failed;
    }
    t->buffer->allow_user_free = 0;
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;
    trace_binder_transaction_alloc_buf(t->buffer);
    if (target_node)
        binder_inc_node(target_node, 1, 0, NULL);

    offp = (binder_size_t *)(t->buffer->data +
                 ALIGN(tr->data_size, sizeof(void *)));

    if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
               tr->data.ptr.buffer, tr->data_size)) {
        binder_user_error("%d:%d got transaction with invalid data ptr\n",
                proc->pid, thread->pid);
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }
    if (copy_from_user(offp, (const void __user *)(uintptr_t)
               tr->data.ptr.offsets, tr->offsets_size)) {
        binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
                proc->pid, thread->pid);
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }

    off_end = (void *)offp + tr->offsets_size;
    off_min = 0;
    for (; offp < off_end; offp++) {
        struct flat_binder_object *fp;

        fp = (struct flat_binder_object *)(t->buffer->data + *offp);
        off_min = *offp + sizeof(struct flat_binder_object);
        switch (fp->type) {
        case BINDER_TYPE_BINDER:
        case BINDER_TYPE_WEAK_BINDER: {
            struct binder_ref *ref;
            struct binder_node *node = binder_get_node(proc, fp->binder);

            if (node == NULL) {
                node = binder_new_node(proc, fp->binder, fp->cookie);
                if (node == NULL) {
                    return_error = BR_FAILED_REPLY;
                    goto err_binder_new_node_failed;
                }
                node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
                node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
            }

            ref = binder_get_ref_for_node(target_proc, node);
            if (ref == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_get_ref_for_node_failed;
            }
            if (fp->type == BINDER_TYPE_BINDER)
                fp->type = BINDER_TYPE_HANDLE;
            else
                fp->type = BINDER_TYPE_WEAK_HANDLE;
            fp->binder = 0;
            fp->handle = ref->desc;
            fp->cookie = 0;
            binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
                       &thread->todo);

            trace_binder_transaction_node_to_ref(t, node, ref);
            binder_debug(BINDER_DEBUG_TRANSACTION,
                     "        node %d u%016llx -> ref %d desc %d\n",
                     node->debug_id, (u64)node->ptr,
                     ref->debug_id, ref->desc);
        } break;
        default:
            return_error = BR_FAILED_REPLY;
            goto err_bad_object_type;
        }
    }
    
    if (reply) {
        BUG_ON(t->buffer->async_transaction != 0);
        binder_pop_transaction(target_thread, in_reply_to);
    } else if (!(t->flags & TF_ONE_WAY)) {
        BUG_ON(t->buffer->async_transaction != 0);
        t->need_reply = 1;
        t->from_parent = thread->transaction_stack;
        thread->transaction_stack = t;
    } else {
        BUG_ON(target_node == NULL);
        BUG_ON(t->buffer->async_transaction != 1);
        if (target_node->has_async_transaction) {
            target_list = &target_node->async_todo;
            target_wait = NULL;
        } else
            target_node->has_async_transaction = 1;
    }
    
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);
    
    if (target_wait)
        wake_up_interruptible(target_wait);
    return;
   
   ......
}

binder_thread_read

After binder_thread_write returns, binder_thread_read starts; since there is no data to read for this thread, it blocks in wait_event_freezable / wait_event_freezable_exclusive.
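
The blocking itself happens in the part of binder_thread_read that is elided ("......") in the listing further below; in the binder.c version this article appears to trace it looks roughly like this:

    if (wait_for_proc_work) {
        ...
        if (non_block) {
            if (!binder_has_proc_work(proc, thread))
                ret = -EAGAIN;
        } else
            ret = wait_event_freezable_exclusive(proc->wait,
                    binder_has_proc_work(proc, thread));
    } else {
        if (non_block) {
            if (!binder_has_thread_work(thread))
                ret = -EAGAIN;
        } else
            ret = wait_event_freezable(thread->wait,
                    binder_has_thread_work(thread));
    }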

Where the handle comes from

In Binder communication a handle stands for a Binder object, so what does this handle really mean? We saw above where the handle is produced; now let's look at how it is computed and what it represents. fp is the flat_binder_object, and its handle is assigned the desc of a binder_ref. For the ServiceManager it is always 0; other services build on top of that. Looking at how desc is computed: it is the largest desc among the binder_refs already held by this process, plus one -- in other words, the count of services this process references, plus one. So different services, referenced from different processes, may well end up with the same handle value (a worked example follows the excerpt below). One might ask why the handle is not handed over to ServiceManager to manage, with a unified reference count for all services.

ref = binder_get_ref_for_node(target_proc, node);
     
fp->binder = 0;
fp->handle = ref->desc;
fp->cookie = 0;

static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
                          struct binder_node *node)
{
    struct rb_node *n;
    struct rb_node **p = &proc->refs_by_node.rb_node;
    struct rb_node *parent = NULL;
    struct binder_ref *ref, *new_ref;

   ......

    new_ref->desc = (node == binder_context_mgr_node) ? 0 : 1;
    for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
        ref = rb_entry(n, struct binder_ref, rb_node_desc);
        if (ref->desc > new_ref->desc)
            break;
        new_ref->desc = ref->desc + 1;
    }

   ......
    
    return new_ref;
}
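
A toy re-implementation of the desc assignment loop above (alloc_desc is a made-up name, not driver code), to make the allocation rule concrete: the context manager's node always gets 0, and every other node referenced by a process gets the smallest free descriptor starting from 1.

#include <cstdint>
#include <set>

// Walk the existing descriptors in ascending order (refs_by_desc is a tree
// ordered by desc) and settle on the first gap.
uint32_t alloc_desc(const std::set<uint32_t>& existing, bool is_context_mgr) {
    uint32_t desc = is_context_mgr ? 0 : 1;
    for (uint32_t d : existing) {
        if (d > desc) break;   // found a gap
        desc = d + 1;
    }
    return desc;
}

// In a fresh process:
//   alloc_desc({},     true)  == 0   first reference: the ServiceManager node
//   alloc_desc({0},    false) == 1   first ordinary service referenced by this process
//   alloc_desc({0, 1}, false) == 2   second ordinary service -- so two different
//                                    services can share a handle value across processes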

binder_thread_read in the ServiceManager process

The wait queue in the driver that belongs to ServiceManager is woken up, and its binder_thread_read gets to work.

  1. binder_thread_read first takes the pending work off the todo queue. Note that two work items were queued during the write; the BINDER_WORK_TRANSACTION item (the binder_transaction) is taken out first.
  2. The binder_transaction struct is then converted into a binder_transaction_data struct that the user-space program can access, and the cmd is converted (BC_TRANSACTION becomes BR_TRANSACTION).
  3. The kernel address of the buffer is translated into the receiving process's user-space address; put_user and copy_to_user are then called in turn to hand the data up to user space.
  4. The BINDER_WORK_TRANSACTION_COMPLETE item is taken out (on the sender's side) and likewise reported to user space.
static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block)
{
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    int ret = 0;
    int wait_for_proc_work;

    if (*consumed == 0) {
        if (put_user(BR_NOOP, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
    }

retry:
    wait_for_proc_work = thread->transaction_stack == NULL &&
                list_empty(&thread->todo);


    thread->looper |= BINDER_LOOPER_STATE_WAITING;
    if (wait_for_proc_work)
        proc->ready_threads++;

    binder_unlock(__func__);
    ......
    binder_lock(__func__);

    if (wait_for_proc_work)
        proc->ready_threads--;
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

    if (ret)
        return ret;

    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;

        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work,
                         entry);
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            w = list_first_entry(&proc->todo, struct binder_work,
                         entry);
        } else {
            /* no data added */
            if (ptr - buffer == 4 &&
                !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN))
                goto retry;
            break;
        }

        if (end - ptr < sizeof(tr) + 4)
            break;

        switch (w->type) {
        case BINDER_WORK_TRANSACTION: {
            t = container_of(w, struct binder_transaction, work);
        } break;
        case BINDER_WORK_TRANSACTION_COMPLETE: {
            cmd = BR_TRANSACTION_COMPLETE;
            if (put_user(cmd, (uint32_t __user *)ptr))
                return -EFAULT;
            ptr += sizeof(uint32_t);

            binder_stat_br(proc, thread, cmd);

            list_del(&w->entry);
            kfree(w);
            binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
        } break;
        }

        if (!t)
            continue;

        BUG_ON(t->buffer == NULL);
        if (t->buffer->target_node) {
            struct binder_node *target_node = t->buffer->target_node;

            tr.target.ptr = target_node->ptr;
            tr.cookie =  target_node->cookie;
            t->saved_priority = task_nice(current);
            if (t->priority < target_node->min_priority &&
                !(t->flags & TF_ONE_WAY))
                binder_set_nice(t->priority);
            else if (!(t->flags & TF_ONE_WAY) ||
                 t->saved_priority > target_node->min_priority)
                binder_set_nice(target_node->min_priority);
            cmd = BR_TRANSACTION;
        } else {
            tr.target.ptr = 0;
            tr.cookie = 0;
            cmd = BR_REPLY;
        }
        tr.code = t->code;
        tr.flags = t->flags;
        tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);

        if (t->from) {
            struct task_struct *sender = t->from->proc->tsk;

            tr.sender_pid = task_tgid_nr_ns(sender,
                            task_active_pid_ns(current));
        } else {
            tr.sender_pid = 0;
        }

        tr.data_size = t->buffer->data_size;
        tr.offsets_size = t->buffer->offsets_size;
        tr.data.ptr.buffer = (binder_uintptr_t)(
                    (uintptr_t)t->buffer->data +
                    proc->user_buffer_offset);
        tr.data.ptr.offsets = tr.data.ptr.buffer +
                    ALIGN(t->buffer->data_size,
                        sizeof(void *));

        if (put_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        if (copy_to_user(ptr, &tr, sizeof(tr)))
            return -EFAULT;
        ptr += sizeof(tr);

        trace_binder_transaction_received(t);
        binder_stat_br(proc, thread, cmd);
        list_del(&t->work.entry);
        t->buffer->allow_user_free = 1;
        if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
            t->to_parent = thread->transaction_stack;
            t->to_thread = thread;
            thread->transaction_stack = t;
        } else {
            t->buffer->transaction = NULL;
            kfree(t);
            binder_stats_deleted(BINDER_STAT_TRANSACTION);
        }
        break;
    }
    
    return 0;
}

ServiceManager's blocked ioctl then returns.


The design of mmap and a discussion of Binder's efficiency

There is a common claim that mmap maps the same memory into both user space and kernel space, saving one copy from kernel space to user space and therefore improving efficiency. From the analysis above:

  1. From user space into the kernel, four copy_from_user calls occur (five copies counting the 4-byte get_user of the command word).
  2. From the kernel back to user space, 2 + 2 = 4 copies occur in total; only a single copy is saved.

So the binder driver's efficiency gain does not come from here. Why use mmap at all, then? Looking at Binder's design, reads and writes do not go through the standard Linux read and write interfaces, so how is data returned from the kernel to user space? The third argument of ioctl only carries an address supplied by user space; mmap was designed to solve exactly this problem of delivering data from the kernel back to the user.

Why, then, is Binder more efficient than traditional IPC?

Kernel resource contention: in traditional IPC, read and write both have to compete for the kernel's read/write buffer and deal with concurrent access. The locking costs something, and on top of that each operation has to queue up and wait.

To avoid contention over a shared kernel buffer, Binder allocates separate buffer space per thread (per transaction), eliminating the waiting on the kernel side. That alone is not enough: even if the kernel does not have to wait, the user-space side may not get around to handling the data promptly, so Binder also adds multi-thread support in user space. Resolving the waiting for resources on both the kernel side and the user side is the real reason Binder is efficient.
