Preface:
iOS gives us several threading technologies for handling work; the two we reach for most often are NSOperation and GCD. This article digs into how GCD works under the hood.
I. Introduction to GCD:
1. Basic concepts:
GCD is short for Grand Central Dispatch. It is written in pure C and wraps a set of powerful functions that give iOS thorough, rigorous control over how tasks are executed. In code terms: queues schedule tasks, and those tasks are then executed by thread functions. How should we read that sentence?
① Tasks:
This one is easy: any piece of work expressed in code can be treated as a task, and in GCD each task is wrapped in a block.
② Queues:
- A queue schedules tasks; I think of it as the first layer of control over task execution.
- A serial queue means tasks 1 and 2 have an explicit ordering dependency. The queue dequeues tasks FIFO (first in, first out), so task 1 must finish before task 2 is executed.
- A concurrent queue is clearly different: broadly speaking it can have several tasks in flight at the same time, and the tasks it schedules have no dependency on one another (they are still dequeued FIFO, but one task does not have to finish before the next one starts).
③ Thread functions:
Is queue scheduling alone enough to control how tasks run? Clearly not: on its own it cannot exploit the CPU's multiple cores. The thread functions are what let us take advantage of those cores and improve throughput:
- Synchronous function: dispatch_sync must wait until the task inside it has finished before moving on, and it does not open a new thread.
- Asynchronous function: dispatch_async does not wait for the task inside it to finish; it can open a new thread to handle the block.
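For instance, here is a minimal sketch of the difference (which thread each block lands on can vary, but the log ordering shows the waiting behaviour):
dispatch_queue_t q = dispatch_queue_create("demo", DISPATCH_QUEUE_SERIAL);
dispatch_sync(q, ^{
    // dispatch_sync blocks the caller until this block returns,
    // and typically runs the block right on the calling thread.
    NSLog(@"sync task on %@", [NSThread currentThread]);
});
NSLog(@"after sync");    // always logged after "sync task"
dispatch_async(q, ^{
    // dispatch_async returns immediately; the block runs later,
    // usually on a worker thread that GCD provides for the queue.
    NSLog(@"async task on %@", [NSThread currentThread]);
});
NSLog(@"after async");   // usually logged before "async task"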
2. Advantages of GCD:
① GCD is the multithreading approach Apple designed and recommends; it makes full use of the device's multi-core CPU to improve task throughput.
② GCD manages the thread lifecycle automatically: creating threads, scheduling tasks and destroying threads, so we never write thread-management code ourselves.
③ The task's code is wrapped in a block that takes no parameters and returns no value.
3. Pairing queues with the thread functions:
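Roughly speaking, the common combinations behave as follows (a quick sketch based on standard GCD behaviour; the main-queue deadlock case is analysed in section IV):
dispatch_queue_t serial = dispatch_queue_create("A", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t concur = dispatch_queue_create("B", DISPATCH_QUEUE_CONCURRENT);
// serial + sync:      no new thread, tasks run one after another on the caller's thread
dispatch_sync(serial, ^{ NSLog(@"1"); });
// serial + async:     at most one worker thread is brought up, tasks still run in order
dispatch_async(serial, ^{ NSLog(@"2"); });
// concurrent + sync:  no new thread, and since the caller waits, tasks still run in order
dispatch_sync(concur, ^{ NSLog(@"3"); });
// concurrent + async: worker threads are brought up, tasks may run in parallel
dispatch_async(concur, ^{ NSLog(@"4"); });
// main queue + async: fine, the block runs later on the main run loop
dispatch_async(dispatch_get_main_queue(), ^{ NSLog(@"5"); });
// main queue + sync, called from the main thread: deadlock (see section IV)
// dispatch_sync(dispatch_get_main_queue(), ^{ NSLog(@"never runs"); });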
II. GCD queue source-code analysis:
1. The main queue and the global queue:
① The main queue:
- Concept: a serial queue dedicated to scheduling tasks onto the main thread, obtained via dispatch_get_main_queue();
- Characteristics: it never opens a thread, and while the main thread is busy executing something, nothing currently added to the main queue gets dispatched;
- Digging into the main queue's source:
dispatch_queue_t serial = dispatch_queue_create("A", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t conque = dispatch_queue_create("B", DISPATCH_QUEUE_CONCURRENT);
dispatch_queue_t mainQueue = dispatch_get_main_queue();
dispatch_queue_t globQueue = dispatch_get_global_queue(0, 0);
NSLog(@"%@-%@-%@-%@",serial,conque,mainQueue,globQueue);
The code above creates queues labelled "A" and "B", then fetches the main queue and a global queue. The output is:
<OS_dispatch_queue_serial: A>-<OS_dispatch_queue_concurrent: B>-<OS_dispatch_queue_main: com.apple.main-thread>-<OS_dispatch_queue_global: com.apple.root.default-qos>
Notice that the strings A and B sit right before the closing > in each queue's description, like a label assignment, so working backwards the main queue's label should be com.apple.main-thread.
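We can verify that guess directly with the public dispatch_queue_get_label API:
NSLog(@"%s", dispatch_queue_get_label(dispatch_get_main_queue())); // prints: com.apple.main-thread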
Also, set a breakpoint inside the block dispatched to the main queue and, in lldb, type bt to print the call stack:
bt
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 7.1
* frame #0: 0x0000000106b9dcc7 Test`__29-[ViewController viewDidLoad]_block_invoke_2(.block_descriptor=0x0000000106ba1108) at ViewController.m:30:9
frame #1: 0x0000000106e0f7ec libdispatch.dylib`_dispatch_call_block_and_release + 12
frame #2: 0x0000000106e109c8 libdispatch.dylib`_dispatch_client_callout + 8
frame #3: 0x0000000106e1ee75 libdispatch.dylib`_dispatch_main_queue_callback_4CF + 1152
The relevant frames live in libdispatch.dylib. Jump into that library, search for the string com.apple.main-thread, and you find this definition:
struct dispatch_queue_static_s _dispatch_main_q = {
DISPATCH_GLOBAL_OBJECT_HEADER(queue_main),
#if !DISPATCH_USE_RESOLVERS
.do_targetq = _dispatch_get_default_queue(true),
#endif
.dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(1) |
DISPATCH_QUEUE_ROLE_BASE_ANON,
.dq_label = "com.apple.main-thread",
.dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1),
.dq_serialnum = 1,
};
② The global queue:
- Concept and characteristics: for convenience, Apple provides the global queue, dispatch_get_global_queue(0, 0). It is a concurrent queue; in multithreaded code, if you have no special requirements for the queue, you can dispatch asynchronous tasks straight onto it.
- Digging into the global queue's source:
Searching the same way leads to this definition:
struct dispatch_queue_global_s _dispatch_root_queues[] = {
#define _DISPATCH_ROOT_QUEUE_IDX(n, flags) \
((flags & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) ? \
DISPATCH_ROOT_QUEUE_IDX_##n##_QOS_OVERCOMMIT : \
DISPATCH_ROOT_QUEUE_IDX_##n##_QOS)
#define _DISPATCH_ROOT_QUEUE_ENTRY(n, flags, ...) \
[_DISPATCH_ROOT_QUEUE_IDX(n, flags)] = { \
DISPATCH_GLOBAL_OBJECT_HEADER(queue_global), \
.dq_state = DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE, \
.do_ctxt = _dispatch_root_queue_ctxt(_DISPATCH_ROOT_QUEUE_IDX(n, flags)), \
.dq_atomic_flags = DQF_WIDTH(DISPATCH_QUEUE_WIDTH_POOL), \
.dq_priority = flags | ((flags & DISPATCH_PRIORITY_FLAG_FALLBACK) ? \
_dispatch_priority_make_fallback(DISPATCH_QOS_##n) : \
_dispatch_priority_make(DISPATCH_QOS_##n, 0)), \
__VA_ARGS__ \
}
_DISPATCH_ROOT_QUEUE_ENTRY(MAINTENANCE, 0,
.dq_label = "com.apple.root.maintenance-qos",
.dq_serialnum = 4,
),
...
_DISPATCH_ROOT_QUEUE_ENTRY(DEFAULT, DISPATCH_PRIORITY_FLAG_FALLBACK,
.dq_label = "com.apple.root.default-qos",
.dq_serialnum = 10,
),
...
_DISPATCH_ROOT_QUEUE_ENTRY(USER_INTERACTIVE, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
.dq_label = "com.apple.root.user-interactive-qos.overcommit",
.dq_serialnum = 15,
),
};
It gets a bit complicated here, but the dq_serialnum field looks familiar from the main queue struct: the main queue has 1, while the global queues have 4-15. What does that mean? Can we conclude that dq_serialnum == 1 means "the main queue"? That seems too loose, because the two kinds of queue also differ in an essential way: one is a serial queue and the other is a concurrent queue.
So, search the whole project for dq_serialnum; there is one place where it gets assigned:
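Abridged from the open-source libdispatch (the exact lines can differ between versions), that assignment lives in this initializer:
static inline dispatch_queue_class_t
_dispatch_queue_init(dispatch_queue_class_t dqu, dispatch_queue_flags_t dqf,
        uint16_t width, uint64_t initial_state_bits)
{
    uint64_t dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(width);
    dispatch_queue_t dq = dqu._dq;
    ...
    dqf |= DQF_WIDTH(width);
    os_atomic_store2o(dq, dq_atomic_flags, dqf, relaxed);
    dq->dq_state = dq_state;
    dq->dq_serialnum =
            os_atomic_inc_orig(&_dispatch_queue_serial_numbers, relaxed);
    return dqu;
}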
The enclosing function name looks like a queue initializer, which is a key clue. The dq_serialnum assignment itself uses os_atomic_inc_orig, whose argument is taken by address; clicking into that counter brings up this key piece of information:
// skip zero
// 1 - main_q
// 2 - mgr_q
// 3 - mgr_root_q
// 4,5,6,7,8,9,10,11,12,13,14,15 - global queues
// 17 - workloop_fallback_q
// we use 'xadd' on Intel, so the initial value == next assigned
#define DISPATCH_QUEUE_SERIAL_NUMBER_INIT 17
The comments // 1 - main_q and // 4,5,6,7,8,9,10,11,12,13,14,15 - global queues make it clear: dq_serialnum == 1 is the main queue, and 4-15 are the global queues.
That is still not the whole story, though. Look at the queue creation function dispatch_queue_create, whose return type is dispatch_queue_t, and follow the call chain: dispatch_queue_create ——> _dispatch_lane_create_with_target, which lands on a very long but crucial piece of code:
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
dispatch_queue_t tq, bool legacy)
{
dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa); // unpacks the serial/concurrent attribute macro
//
// Step 1: Normalize arguments (qos, overcommit, tq), i.e. take the dqai produced above and normalize it
//
dispatch_qos_t qos = dqai.dqai_qos;
#if !HAVE_PTHREAD_WORKQUEUE_QOS
if (qos == DISPATCH_QOS_USER_INTERACTIVE) {
dqai.dqai_qos = qos = DISPATCH_QOS_USER_INITIATED;
}
if (qos == DISPATCH_QOS_MAINTENANCE) {
dqai.dqai_qos = qos = DISPATCH_QOS_BACKGROUND;
}
#endif // !HAVE_PTHREAD_WORKQUEUE_QOS
_dispatch_queue_attr_overcommit_t overcommit = dqai.dqai_overcommit;
if (overcommit != _dispatch_queue_attr_overcommit_unspecified && tq) {
if (tq->do_targetq) {
DISPATCH_CLIENT_CRASH(tq, "Cannot specify both overcommit and "
"a non-global target queue");
}
}
if (tq && dx_type(tq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE) {
// Handle discrepancies between attr and target queue, attributes win
if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
if (tq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
overcommit = _dispatch_queue_attr_overcommit_enabled;
} else {
overcommit = _dispatch_queue_attr_overcommit_disabled;
}
}
if (qos == DISPATCH_QOS_UNSPECIFIED) {
qos = _dispatch_priority_qos(tq->dq_priority);
}
tq = NULL;
} else if (tq && !tq->do_targetq) {
// target is a pthread or runloop root queue, setting QoS or overcommit
// is disallowed
if (overcommit != _dispatch_queue_attr_overcommit_unspecified) {
DISPATCH_CLIENT_CRASH(tq, "Cannot specify an overcommit attribute "
"and use this kind of target queue");
}
} else {
if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
// Serial queues default to overcommit!
overcommit = dqai.dqai_concurrent ?
_dispatch_queue_attr_overcommit_disabled :
_dispatch_queue_attr_overcommit_enabled;
}
}
if (!tq) {
tq = _dispatch_get_root_queue(
qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos,
overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq;
if (unlikely(!tq)) {
DISPATCH_CLIENT_CRASH(qos, "Invalid queue attribute");
}
}
//
// Step 2: Initialize the queue
//
if (legacy) {
// if any of these attributes is specified, use non legacy classes
if (dqai.dqai_inactive || dqai.dqai_autorelease_frequency) {
legacy = false;
}
}
const void *vtable;
dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;
if (dqai.dqai_concurrent) {
vtable = DISPATCH_VTABLE(queue_concurrent);
} else {
vtable = DISPATCH_VTABLE(queue_serial);
}
switch (dqai.dqai_autorelease_frequency) {
case DISPATCH_AUTORELEASE_FREQUENCY_NEVER:
dqf |= DQF_AUTORELEASE_NEVER;
break;
case DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM:
dqf |= DQF_AUTORELEASE_ALWAYS;
break;
}
if (label) {
const char *tmp = _dispatch_strdup_if_mutable(label);
if (tmp != label) {
dqf |= DQF_LABEL_NEEDS_FREE;
label = tmp;
}
}
dispatch_lane_t dq = _dispatch_object_alloc(vtable,
sizeof(struct dispatch_lane_s));
_dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
(dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));
dq->dq_label = label;
dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
dqai.dqai_relpri);
if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
}
if (!dqai.dqai_inactive) {
_dispatch_queue_priority_inherit_from_target(dq, tq);
_dispatch_lane_inherit_wlh_from_target(dq, tq);
}
_dispatch_retain(tq);
dq->do_targetq = tq;
_dispatch_object_debug(dq, "%s", __func__);
return _dispatch_trace_queue_create(dq)._dq;
}
What is all this? Start with the return: the trace in _dispatch_trace_queue_create suggests the sort of instrumentation we normally do for analytics, too convoluted to chase here. The key thing is dq (the same dq we met while exploring the main queue); these lines look like the code that allocates memory and instantiates the queue (the alloc and init, so to speak):
dispatch_lane_t dq = _dispatch_object_alloc(vtable,
sizeof(struct dispatch_lane_s));
_dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
(dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));
This _dispatch_queue_init code is the same one we ran into while exploring the global queue. Step inside and you find dqf |= DQF_WIDTH(width); given the argument dqai.dqai_concurrent ? DISPATCH_QUEUE_WIDTH_MAX : 1 passed in, we can conclude that a serial queue gets DQF_WIDTH(1). It is this width, not the serialnum, that actually marks a queue as serial.
III. More on GCD queues:
1. Recap:
So far we have really only studied queue creation, identifying the main queue, the global queues, serial queues and concurrent queues from the source. One thing still bothers me: outside the create function we expect a dispatch_queue_t back, yet the _dispatch_queue_init constructor and the returned _dispatch_trace_queue_create both traffic in dispatch_queue_class_t, and _dispatch_object_alloc, followed to the end, returns an _os_object_t. It feels a bit tangled, so what exactly is a queue?
2. Tracing the dispatch_queue_t hierarchy:
dispatch_queue_t serial = dispatch_queue_create("A", DISPATCH_QUEUE_SERIAL);
Command-click dispatch_queue_t and keep following the definitions; you eventually hit a macro:
DISPATCH_DECL(dispatch_queue);
#define DISPATCH_DECL(name) OS_OBJECT_DECL_SUBCLASS(name, dispatch_object)
OS_OBJECT_DECL_SUBCLASS cannot be followed any further from here, but searching the libdispatch source turns up this chain of definitions:
#define OS_OBJECT_DECL_SUBCLASS(name, super) \
OS_OBJECT_DECL_IMPL(name, NSObject, <OS_OBJECT_CLASS(super)>)
#define OS_OBJECT_CLASS(name) OS_##name
#define OS_OBJECT_DECL_IMPL(name, adhere, ...) \
OS_OBJECT_DECL_PROTOCOL(name, __VA_ARGS__) \
typedef adhere<OS_OBJECT_CLASS(name)> \
* OS_OBJC_INDEPENDENT_CLASS name##_t
#define OS_OBJECT_DECL_PROTOCOL(name, ...) \
@protocol OS_OBJECT_CLASS(name) __VA_ARGS__ \
@end
#define OS_OBJC_INDEPENDENT_CLASS __attribute__((objc_independent_class))
Expanded all the way out, that becomes:
@protocol OS_dispatch_queue <OS_dispatch_object>
@end
typedef NSObject<OS_dispatch_queue> * dispatch_queue_t
Now find the OS_dispatch_queue protocol itself:
#ifdef __OBJC__
@protocol OS_dispatch_queue;
#endif
// Lane cluster class: type for all the queues that have a single head/tail pair
typedef union {
struct dispatch_lane_s *_dl;
struct dispatch_queue_static_s *_dsq;
struct dispatch_queue_global_s *_dgq;
struct dispatch_queue_pthread_root_s *_dpq;
struct dispatch_source_s *_ds;
struct dispatch_channel_s *_dch;
struct dispatch_mach_s *_dm;
#ifdef __OBJC__
id<OS_dispatch_queue> _objc_dq; // unsafe cast for the sake of object.m
#endif
} dispatch_lane_class_t DISPATCH_TRANSPARENT_UNION;
// Dispatch queue cluster class: type for any dispatch_queue_t
typedef union {
struct dispatch_queue_s *_dq;
struct dispatch_workloop_s *_dwl;
struct dispatch_lane_s *_dl;
struct dispatch_queue_static_s *_dsq;
struct dispatch_queue_global_s *_dgq;
struct dispatch_queue_pthread_root_s *_dpq;
struct dispatch_source_s *_ds;
struct dispatch_channel_s *_dch;
struct dispatch_mach_s *_dm;
dispatch_lane_class_t _dlu;
#ifdef __OBJC__
id<OS_dispatch_queue> _objc_dq;
#endif
} dispatch_queue_class_t DISPATCH_TRANSPARENT_UNION;
#ifndef __OBJC__
typedef union {
struct _os_object_s *_os_obj;
struct dispatch_object_s *_do;
struct dispatch_queue_s *_dq;
struct dispatch_queue_attr_s *_dqa;
struct dispatch_group_s *_dg;
struct dispatch_source_s *_ds;
struct dispatch_channel_s *_dch;
struct dispatch_mach_s *_dm;
struct dispatch_mach_msg_s *_dmsg;
struct dispatch_semaphore_s *_dsema;
struct dispatch_data_s *_ddata;
struct dispatch_io_s *_dchannel;
struct dispatch_continuation_s *_dc;
struct dispatch_sync_context_s *_dsc;
struct dispatch_operation_s *_doperation;
struct dispatch_disk_s *_ddisk;
struct dispatch_workloop_s *_dwl;
struct dispatch_lane_s *_dl;
struct dispatch_queue_static_s *_dsq;
struct dispatch_queue_global_s *_dgq;
struct dispatch_queue_pthread_root_s *_dpq;
dispatch_queue_class_t _dqu;
dispatch_lane_class_t _dlu;
uintptr_t _do_value;
} dispatch_object_t DISPATCH_TRANSPARENT_UNION;
Hard to make sense of; set it aside for later and try another angle. A global search shows there is a second definition of the same macro:
#define DISPATCH_DECL(name) \
typedef struct name##_s : public dispatch_object_s {} *name##_t
Substituting dispatch_queue, it expands to:
typedef struct dispatch_queue_s : public dispatch_object_s {} *dispatch_queue_t
Put plainly: dispatch_queue_t is a pointer to struct dispatch_queue_s, and (in this C++ path) dispatch_queue_s publicly inherits from dispatch_object_s. Now look at this code:
typedef struct dispatch_object_s {
private:
dispatch_object_s();
~dispatch_object_s();
dispatch_object_s(const dispatch_object_s &);
void operator=(const dispatch_object_s &);
} *dispatch_object_t;
#define DISPATCH_DECL(name) \
typedef struct name##_s : public dispatch_object_s {} *name##_t
#define DISPATCH_DECL_SUBCLASS(name, base) \
typedef struct name##_s : public base##_s {} *name##_t
#define DISPATCH_GLOBAL_OBJECT(type, object) (static_cast<type>(&(object)))
#define DISPATCH_RETURNS_RETAINED
#else /* Plain C */
#ifndef __DISPATCH_BUILDING_DISPATCH__
typedef union {
struct _os_object_s *_os_obj;
struct dispatch_object_s *_do;
struct dispatch_queue_s *_dq;
struct dispatch_queue_attr_s *_dqa;
struct dispatch_group_s *_dg;
struct dispatch_source_s *_ds;
struct dispatch_channel_s *_dch;
struct dispatch_mach_s *_dm;
struct dispatch_mach_msg_s *_dmsg;
struct dispatch_semaphore_s *_dsema;
struct dispatch_data_s *_ddata;
struct dispatch_io_s *_dchannel;
} dispatch_object_t DISPATCH_TRANSPARENT_UNION;
So ultimately everything traces back to dispatch_object_t.
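A quick check from the Objective-C side is consistent with the NSObject<OS_dispatch_queue> typedef above (the class name actually printed is an implementation detail, not something the headers promise):
dispatch_queue_t serial = dispatch_queue_create("A", DISPATCH_QUEUE_SERIAL);
// Under ARC a dispatch queue behaves like an ordinary Objective-C object.
NSLog(@"%@", [serial class]);                          // e.g. OS_dispatch_queue_serial
NSLog(@"%d", [serial isKindOfClass:[NSObject class]]); // 1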
IV. GCD thread-function source analysis:
1. The synchronous function:
Look it up directly in libdispatch:
void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
uintptr_t dc_flags = DC_FLAG_BLOCK;
if (unlikely(_dispatch_block_has_private_data(work))) {
return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
}
_dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}
unlikely literally means "not likely to happen", so skip that branch and look at _dispatch_sync_f. It receives dq of type dispatch_queue_t and work of type dispatch_block_t, i.e. the queue and the task it schedules, plus a wrapper function built around work (_dispatch_Block_invoke), which looks important but can wait. Click into _dispatch_sync_f and keep following; the call chain is:
static void
_dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
uintptr_t dc_flags)
{
_dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
dispatch_function_t func, uintptr_t dc_flags)
{
#mark serial queue (width == 1)
if (likely(dq->dq_width == 1)) {
#mark a barrier function shows up here
return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
}
#mark queue type doesn't match, crash
if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
}
#mark dispatch_lane_t showed up before, in the queue alloc inside the create function
dispatch_lane_t dl = upcast(dq)._dl;
// Global concurrent queues and queues bound to non-dispatch threads
// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
#mark the slow path where the deadlock check lives
return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
}
if (unlikely(dq->do_targetq->do_targetq)) {
#mark
return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
}
_dispatch_introspection_sync_begin(dl);
_dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
_dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}
static void
_dispatch_sync_invoke_and_complete(dispatch_lane_t dq, void *ctxt,
dispatch_function_t func DISPATCH_TRACE_ARG(void *dc))
{
_dispatch_sync_function_invoke_inline(dq, ctxt, func);
#mark looks like trace/instrumentation, skip it
_dispatch_trace_item_complete(dc);
_dispatch_lane_non_barrier_complete(dq, 0);
}
static inline void
_dispatch_sync_function_invoke_inline(dispatch_queue_class_t dq, void *ctxt,
dispatch_function_t func)
{
dispatch_thread_frame_s dtf;
_dispatch_thread_frame_push(&dtf, dq);
#mark this function looks familiar: it appeared in the call stack printed at the start of the article, take a look
_dispatch_client_callout(ctxt, func);
_dispatch_perfmon_workitem_inc();
_dispatch_thread_frame_pop(&dtf);
}
void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
_dispatch_get_tsd_base();
void *u = _dispatch_get_unwind_tsd();
if (likely(!u)) return f(ctxt);
_dispatch_set_unwind_tsd(NULL);
#mark here f is invoked with ctxt wrapped inside, i.e. the task runs
f(ctxt);
_dispatch_free_unwind_tsd();
_dispatch_set_unwind_tsd(u);
}
So in the end it is f(ctxt), i.e. the task work, that gets executed. Now let's look at the deadlock case, starting with a piece of main-queue sync code:
dispatch_sync(dispatch_get_main_queue(), ^{
NSLog(@"test");
});
It crashes at runtime, a classic deadlock. So what do the deadlock's call chain and logic look like? After several tries with symbolic breakpoints, execution turns out to reach _dispatch_sync_f_slow, shown below:
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
dispatch_function_t func, uintptr_t top_dc_flags,
dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
dispatch_queue_t top_dq = top_dqu._dq;
dispatch_queue_t dq = dqu._dq;
if (unlikely(!dq->do_targetq)) {
return _dispatch_sync_function_invoke(dq, ctxt, func);
}
pthread_priority_t pp = _dispatch_get_priority();
struct dispatch_sync_context_s dsc = {
.dc_flags = DC_FLAG_SYNC_WAITER | dc_flags,
.dc_func = _dispatch_async_and_wait_invoke,
.dc_ctxt = &dsc,
.dc_other = top_dq,
.dc_priority = pp | _PTHREAD_PRIORITY_ENFORCE_FLAG,
.dc_voucher = _voucher_get(),
.dsc_func = func,
.dsc_ctxt = ctxt,
.dsc_waiter = _dispatch_tid_self(),
};
_dispatch_trace_item_push(top_dq, &dsc);
__DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);
if (dsc.dsc_func == NULL) {
// dsc_func being cleared means that the block ran on another thread ie.
// case (2) as listed in _dispatch_async_and_wait_f_slow.
dispatch_queue_t stop_dq = dsc.dc_other;
return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
}
_dispatch_introspection_sync_begin(top_dq);
_dispatch_trace_item_pop(top_dq, &dsc);
_dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func,top_dc_flags
DISPATCH_TRACE_ARG(&dsc));
}
// these macros fetch the current thread's id
#define _dispatch_tid_self() ((dispatch_tid)_dispatch_thread_port())
#define _dispatch_thread_port() pthread_mach_thread_np(_dispatch_thread_self())
#define _dispatch_thread_self() ((uintptr_t)pthread_self())
Remove the breakpoints and run: it crashes, and printing the call stack with lldb's bt shows execution ends up at the call __DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq) inside _dispatch_sync_f_slow. Step inside:
static void
__DISPATCH_WAIT_FOR_QUEUE__(dispatch_sync_context_t dsc, dispatch_queue_t dq)
{
uint64_t dq_state = _dispatch_wait_prepare(dq);
if (unlikely(_dq_state_drain_locked_by(dq_state, dsc->dsc_waiter))) {
DISPATCH_CLIENT_CRASH((uintptr_t)dq_state,
"dispatch_sync called on queue "
"already owned by current thread");
#mark the sync call is waiting on a queue already owned by the current thread
}
...
}
Now look at the condition:
_dq_state_drain_locked_by(uint64_t dq_state, dispatch_tid tid)
{
return _dispatch_lock_is_locked_by((dispatch_lock)dq_state, tid);
}
_dispatch_lock_is_locked_by(dispatch_lock lock_value, dispatch_tid tid)
{
// equivalent to _dispatch_lock_owner(lock_value) == tid
return ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0;
}
In the check ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0, DLOCK_OWNER_MASK is a large nonzero constant, so the whole expression is 0 exactly when lock_value and tid agree on the masked owner bits, i.e. when (lock_value ^ tid) leaves nothing behind after masking. From the call chain, lock_value is derived from the state of the dq passed in from the outermost call, while tid is the id of the current waiting thread; if the queue's lock is already owned by that very thread, the condition holds and we get the deadlock crash.
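A tiny made-up example of that bit check (the mask value and thread ids here are invented purely to illustrate the arithmetic):
uint32_t OWNER_MASK = 0xFFFFFFFC;   // hypothetical owner mask: large and nonzero
uint32_t lock_value = 0x2603;       // owner tid recorded in the queue's lock word
uint32_t waiter_tid = 0x2603;       // tid of the thread now being asked to wait
// (0x2603 ^ 0x2603) == 0, so the masked result is 0: the queue is already owned
// by the waiting thread and the crash branch is taken. With a different waiter,
// say 0x3A07, the XOR leaves owner bits set and the check fails.
int same_owner = ((lock_value ^ waiter_tid) & OWNER_MASK) == 0;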
This is a bit abstract, so how should we picture the deadlock? The thread that is now waiting (to be scheduled) is the very same thread that the queue already records as executing (and being waited on), so the two block each other forever.
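So in the crashing example above, simply not making the main thread wait on its own queue avoids the condition entirely, e.g.:
// Same queue, but the caller no longer waits on it, so there is no sync
// waiter whose tid can match the queue's current owner: no deadlock.
dispatch_async(dispatch_get_main_queue(), ^{
    NSLog(@"test");   // runs on a later run-loop turn
});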
2. The asynchronous function:
Look it up in libdispatch:
void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
dispatch_continuation_t dc = _dispatch_continuation_alloc();
uintptr_t dc_flags = DC_FLAG_CONSUME;
dispatch_qos_t qos;
qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
_dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}
Start with the line qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags); and click into _dispatch_continuation_init, following its call chain downward:
static inline dispatch_qos_t
_dispatch_continuation_init(dispatch_continuation_t dc,
dispatch_queue_class_t dqu, dispatch_block_t work,
dispatch_block_flags_t flags, uintptr_t dc_flags)
{
void *ctxt = _dispatch_Block_copy(work);
dc_flags |= DC_FLAG_BLOCK | DC_FLAG_ALLOCATED;
if (unlikely(_dispatch_block_has_private_data(work))) {
dc->dc_flags = dc_flags;
dc->dc_ctxt = ctxt;
// will initialize all fields but requires dc_flags & dc_ctxt to be set
return _dispatch_continuation_init_slow(dc, dqu, flags);
}
dispatch_function_t func = _dispatch_Block_invoke(work);
if (dc_flags & DC_FLAG_CONSUME) {
func = _dispatch_call_block_and_release;
}
return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
}
static inline dispatch_qos_t
_dispatch_continuation_init_f(dispatch_continuation_t dc,
dispatch_queue_class_t dqu, void *ctxt, dispatch_function_t f,
dispatch_block_flags_t flags, uintptr_t dc_flags)
{
pthread_priority_t pp = 0;
dc->dc_flags = dc_flags | DC_FLAG_ALLOCATED;
#mark here the task's ctxt and f are both stored into dc
dc->dc_func = f;
dc->dc_ctxt = ctxt;
// in this context DISPATCH_BLOCK_HAS_PRIORITY means that the priority
// should not be propagated, only taken from the handler if it has one
if (!(flags & DISPATCH_BLOCK_HAS_PRIORITY)) {
pp = _dispatch_priority_propagate();
}
_dispatch_continuation_voucher_set(dc, flags);
#mark from here on only dc is needed
return _dispatch_continuation_priority_set(dc, dqu, pp, flags);
}
static inline dispatch_qos_t
#mark roughly the priority setup (async opens threads, so priorities need to be distinguished)
_dispatch_continuation_priority_set(dispatch_continuation_t dc,
dispatch_queue_class_t dqu,
pthread_priority_t pp, dispatch_block_flags_t flags)
{
dispatch_qos_t qos = DISPATCH_QOS_UNSPECIFIED;
#if HAVE_PTHREAD_WORKQUEUE_QOS
dispatch_queue_t dq = dqu._dq;
if (likely(pp)) {
bool enforce = (flags & DISPATCH_BLOCK_ENFORCE_QOS_CLASS);
bool is_floor = (dq->dq_priority & DISPATCH_PRIORITY_FLAG_FLOOR);
bool dq_has_qos = (dq->dq_priority & DISPATCH_PRIORITY_REQUESTED_MASK);
if (enforce) {
pp |= _PTHREAD_PRIORITY_ENFORCE_FLAG;
qos = _dispatch_qos_from_pp_unsafe(pp);
} else if (!is_floor && dq_has_qos) {
pp = 0;
} else {
qos = _dispatch_qos_from_pp_unsafe(pp);
}
}
dc->dc_priority = pp;
#else
(void)dc; (void)dqu; (void)pp; (void)flags;
#endif
return qos;
}
To recap: _dispatch_continuation_init packages the task and works out its priority, returning the result into qos. Next, look at _dispatch_continuation_async and follow its call chain:
static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
_dispatch_trace_item_push(dqu, dc);
}
#else
(void)dc_flags;
#endif
return dx_push(dqu._dq, dc, qos);
}
dx_push is a macro; follow its definition:
#define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)
Now look at dq_push itself. Search the library for it and, to make the multithreading path easier to study, pick the concurrent-queue vtable:
DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_concurrent, lane,
.do_type = DISPATCH_QUEUE_CONCURRENT_TYPE,
.do_dispose = _dispatch_lane_dispose,
.do_debug = _dispatch_queue_debug,
.do_invoke = _dispatch_lane_invoke,
.dq_activate = _dispatch_lane_activate,
.dq_wakeup = _dispatch_lane_wakeup,
.dq_push = _dispatch_lane_concurrent_push,
);
For a concurrent queue, _dispatch_lane_concurrent_push is what dx_push resolves to, so return dx_push(dqu._dq, dc, qos); translates to _dispatch_lane_concurrent_push(dqu._dq, dc, qos). Searching the library finds this function:
void
_dispatch_lane_concurrent_push(dispatch_lane_t dq, dispatch_object_t dou,
dispatch_qos_t qos)
{
// <rdar://problem/24738102&24743140> reserving non barrier width
// doesn't fail if only the ENQUEUED bit is set (unlike its barrier
// width equivalent), so we have to check that this thread hasn't
// enqueued anything ahead of this call or we can break ordering
if (dq->dq_items_tail == NULL &&
!_dispatch_object_is_waiter(dou) &&
!_dispatch_object_is_barrier(dou) &&
_dispatch_queue_try_acquire_async(dq)) {
return _dispatch_continuation_redirect_push(dq, dou, qos);
}
#mark same as the serial queue's dx_push; note _dispatch_object_is_barrier(dou) == true can also land here, revisit later
_dispatch_lane_push(dq, dou, qos);
}
A symbolic breakpoint on _dispatch_continuation_redirect_push confirms execution goes through it, so step inside:
static void
_dispatch_continuation_redirect_push(dispatch_lane_t dl,
dispatch_object_t dou, dispatch_qos_t qos)
{
if (likely(!_dispatch_object_is_redirection(dou))) {
dou._dc = _dispatch_async_redirect_wrap(dl, dou);
} else if (!dou._dc->dc_ctxt) {
// find first queue in descending target queue order that has
// an autorelease frequency set, and use that as the frequency for
// this continuation.
dou._dc->dc_ctxt = (void *)
(uintptr_t)_dispatch_queue_autorelease_frequency(dl);
}
#mark what gets used here is dl->do_targetq
dispatch_queue_t dq = dl->do_targetq;
if (!qos) qos = _dispatch_priority_qos(dq->dq_priority);
#mark is this dq the one handed down from the caller?
dx_push(dq, dou, qos);
}
The dq passed into this dx_push is dl->do_targetq, something we saw in the queue's create function; note this code:
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
dispatch_queue_t tq, bool legacy)
{
if (!tq) {
tq = _dispatch_get_root_queue(
qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos,
overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq;
if (unlikely(!tq)) {
DISPATCH_CLIENT_CRASH(qos, "Invalid queue attribute");
}
}
....
dq->do_targetq = tq;
....
}
A global search for _dispatch_get_root_queue turns up:
#mark note that what is returned here is a global (root) queue
static inline dispatch_queue_global_t
_dispatch_get_root_queue(dispatch_qos_t qos, bool overcommit)
{
if (unlikely(qos < DISPATCH_QOS_MIN || qos > DISPATCH_QOS_MAX)) {
DISPATCH_CLIENT_CRASH(qos, "Corrupted priority");
}
return &_dispatch_root_queues[2 * (qos - 1) + overcommit];
}
Perfect: the dq passed into dx_push here is actually a global (root) queue, which takes us to this vtable:
DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_global, lane,
.do_type = DISPATCH_QUEUE_GLOBAL_ROOT_TYPE,
.do_dispose = _dispatch_object_no_dispose,
.do_debug = _dispatch_queue_debug,
.do_invoke = _dispatch_object_no_invoke,
.dq_activate = _dispatch_queue_no_activate,
.dq_wakeup = _dispatch_root_queue_wakeup,
.dq_push = _dispatch_root_queue_push,
);
Its dq_push is _dispatch_root_queue_push, so search for that function:
void
_dispatch_root_queue_push(dispatch_queue_global_t rq, dispatch_object_t dou,
dispatch_qos_t qos)
{
...
_dispatch_root_queue_push_inline(rq, dou, dou, 1);
}
static inline void
_dispatch_root_queue_push_inline(dispatch_queue_global_t dq,
dispatch_object_t _head, dispatch_object_t _tail, int n)
{
struct dispatch_object_s *hd = _head._do, *tl = _tail._do;
if (unlikely(os_mpsc_push_list(os_mpsc(dq, dq_items), hd, tl, do_next))) {
return _dispatch_root_queue_poke(dq, n, 0);
}
}
void
_dispatch_root_queue_poke(dispatch_queue_global_t dq, int n, int floor)
{
...
return _dispatch_root_queue_poke_slow(dq, n, floor);
}
static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
...
#if !defined(_WIN32)
pthread_attr_t *attr = &pqc->dpq_thread_attr;
pthread_t tid, *pthr = &tid;
#if DISPATCH_USE_MGR_THREAD && DISPATCH_USE_PTHREAD_ROOT_QUEUES
if (unlikely(dq == &_dispatch_mgr_root_queue)) {
pthr = _dispatch_mgr_root_queue_init();
}
#endif
do {
_dispatch_retain(dq); // released in _dispatch_worker_thread
while ((r = pthread_create(pthr, attr, _dispatch_worker_thread, dq))) {
if (r != EAGAIN) {
(void)dispatch_assume_zero(r);
}
_dispatch_temporary_resource_shortage();
}
} while (--remaining);
#else // defined(_WIN32)
#if DISPATCH_USE_MGR_THREAD && DISPATCH_USE_PTHREAD_ROOT_QUEUES
if (unlikely(dq == &_dispatch_mgr_root_queue)) {
_dispatch_mgr_root_queue_init();
}
#endif
do {
_dispatch_retain(dq); // released in _dispatch_worker_thread
#if DISPATCH_DEBUG
unsigned dwStackSize = 0;
#else
unsigned dwStackSize = 64 * 1024;
#endif
uintptr_t hThread = 0;
while (!(hThread = _beginthreadex(NULL, dwStackSize, _dispatch_worker_thread_thunk, dq, STACK_SIZE_PARAM_IS_A_RESERVATION, NULL))) {
if (errno != EAGAIN) {
(void)dispatch_assume(hThread);
}
_dispatch_temporary_resource_shortage();
}
#if DISPATCH_USE_PTHREAD_ROOT_QUEUES
if (_dispatch_mgr_sched.prio > _dispatch_mgr_sched.default_prio) {
(void)dispatch_assume_zero(SetThreadPriority((HANDLE)hThread, _dispatch_mgr_sched.prio) == TRUE);
}
#endif
CloseHandle((HANDLE)hThread);
} while (--remaining);
#endif // defined(_WIN32)
#else
(void)floor;
#endif // DISPATCH_USE_PTHREAD_POOL
}
At last we see thread-related code such as pthread_create and _beginthreadex: this is where the asynchronous function ultimately creates its threads.
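That matches what we can observe from the API side; a small check of the visible effect (not a proof of the exact code path above):
NSLog(@"caller: %@", [NSThread currentThread]);     // e.g. the main thread
dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
    // lands on a worker thread libdispatch created for the root queue
    NSLog(@"worker: %@", [NSThread currentThread]);
});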
V. No summary this time.
Next up: working through GCD with some hands-on problems.