A Painstaking Analysis of the GCD Source Code (2): dispatch_async / dispatch_sync / dispatch_once / dispatch_group

In the previous chapter we saw how GCD queues are obtained at the bottom layer. Once we have a queue, we need to submit tasks to it for processing.
There are two ways to submit tasks:
dispatch_async and dispatch_sync. The first is asynchronous: it returns without waiting for the task to finish. The second is synchronous: it waits until the task completes. The two differ as follows:

dispatch_async: backed by a thread pool underneath; the task is processed on a thread different from the current one.

dispatch_sync: generally does not spawn a new thread, but runs the task on the current thread (the main queue is the special case: it uses the main runloop to hand the task to the main thread). It also blocks the current thread until the submitted task has finished. When the target queue is a concurrent queue, the task is executed directly. When the target queue is a serial queue, GCD checks whether the current thread already owns that serial queue, and if so it triggers a crash. This differs from older versions of GCD, which would simply deadlock: the new GCD has a deadlock-detection mechanism that crashes instead, sparing us hard-to-debug deadlocks.
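
The difference is easy to observe at the call site. A minimal usage sketch (ordinary public API, nothing taken from the source below):

    dispatch_queue_t q = dispatch_queue_create("demo", DISPATCH_QUEUE_CONCURRENT);

    dispatch_async(q, ^{ NSLog(@"async task"); }); // returns immediately; the task runs later on another thread
    NSLog(@"after async");                         // may print before "async task"

    dispatch_sync(q, ^{ NSLog(@"sync task"); });   // blocks until the block has run
    NSLog(@"after sync");                          // always prints after "sync task"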

As shown below, submitting a task to a serial queue, on the same thread, while a task on that queue is still executing, triggers a crash.

(Screenshot: crash backtrace for dispatch_sync onto the currently executing serial queue)

However, this deadlock-detection mechanism is not perfect: we can still bypass the crash and produce a real deadlock (asking for trouble, aren't we?). See the source analysis below for the details.

dispatch_sync

void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
	dispatch_sync_f(dq, work, _dispatch_Block_invoke(work));
}

void
dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func)
{
	if (likely(dq->dq_width == 1)) {
		// serial queues (width 1) take this path
		return dispatch_barrier_sync_f(dq, ctxt, func);
	}
	// concurrent queues take this path
	_dispatch_sync_invoke_and_complete(dq, ctxt, func);
}

First, let's look at dispatch_barrier_sync_f, the path taken by serial queues:

DISPATCH_NOINLINE
void
dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func)
{
	dispatch_tid tid = _dispatch_tid_self(); // get the current thread id
	if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dq, tid))) { // the current thread tries to acquire the serial queue's lock
		return _dispatch_sync_f_slow(dq, ctxt, func, DISPATCH_OBJ_BARRIER_BIT); // the lock could not be acquired: enqueue serially and block the current thread
	}

	// no waiting needed: take this path
	_dispatch_queue_barrier_sync_invoke_and_complete(dq, ctxt, func);
}

Let's focus on how a thread tries to acquire the serial queue's lock. This step matters: it is the foundation of the deadlock detection we will see later.

DISPATCH_ALWAYS_INLINE DISPATCH_WARN_RESULT
static inline bool
_dispatch_queue_try_acquire_barrier_sync(dispatch_queue_t dq, uint32_t tid)
{
	uint64_t init  = DISPATCH_QUEUE_STATE_INIT_VALUE(dq->dq_width);
	uint64_t value = DISPATCH_QUEUE_WIDTH_FULL_BIT | DISPATCH_QUEUE_IN_BARRIER |
			_dispatch_lock_value_from_tid(tid); // _dispatch_lock_value_from_tid takes bits 2 through 31 of the tid (counting from bit 0) as the value
	uint64_t old_state, new_state;

	// Behind the pile of macro-wrapped atomic operations, what actually happens is:
	// try to store new_state into dq.dq_state. First the current dq_state is atomically loaded (atomic_load_explicit) as old_state. If old_state is not dq_state's default value (init | role), the store fails and false is returned (someone already modified dq_state; on a serial queue only one owner may change dq_state at a time), i.e. the lock was not acquired. Otherwise dq_state is set to new_state (via atomic_compare_exchange_weak_explicit) and true is returned: the lock is acquired.
	return os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, acquire, {
		uint64_t role = old_state & DISPATCH_QUEUE_ROLE_MASK;
		if (old_state != (init | role)) { // dq_state was already modified: return false without updating dq_state to new_state
			os_atomic_rmw_loop_give_up(break);
		}
		new_state = value | role;
	});
}

The code above reads dq_state from the dispatch_queue_t. If dq_state has not been modified by anyone else, i.e. this is the first modification, dq_state is set to new_state and true is returned. new_state marks the queue as locked and records the tid of the thread that locked it.

If dq_state has already been modified, the function returns false and leaves dq_state unchanged.
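
As a mental model of this try-acquire (not the real libdispatch code; the constants below are made-up stand-ins for the WIDTH_FULL/IN_BARRIER flag bits), the whole thing boils down to one compare-and-swap on a word that encodes a locked flag plus the owner's tid:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define Q_STATE_INIT 0ULL       // hypothetical "never touched" state
#define Q_LOCK_BITS  0x3ULL     // hypothetical flag bits

static bool try_acquire_barrier_sync(_Atomic uint64_t *dq_state, uint32_t tid)
{
	uint64_t expected = Q_STATE_INIT;
	uint64_t desired  = Q_LOCK_BITS | (uint64_t)tid; // lock flags + owner tid
	// Fails if anyone has already changed dq_state, mirroring
	// os_atomic_rmw_loop_give_up(break) above; succeeds otherwise and
	// records the owner in a single atomic step.
	return atomic_compare_exchange_strong_explicit(dq_state, &expected,
			desired, memory_order_acquire, memory_order_relaxed);
}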

Having seen the inside of _dispatch_queue_try_acquire_barrier_sync, let's go back up one level to dispatch_barrier_sync_f:

DISPATCH_NOINLINE
void
dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func)
{
	dispatch_tid tid = _dispatch_tid_self(); // get the current thread id
	if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dq, tid))) { // the current thread tries to acquire the serial queue's lock
		return _dispatch_sync_f_slow(dq, ctxt, func, DISPATCH_OBJ_BARRIER_BIT); // the lock could not be acquired: enqueue serially and block the current thread
	}

	...
}

If _dispatch_queue_try_acquire_barrier_sync returns false, we enter _dispatch_sync_f_slow, which waits for the previous task on the serial queue to finish:

DISPATCH_NOINLINE
static void
_dispatch_sync_f_slow(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	 
	if (unlikely(!dq->do_targetq)) { // if dq has no target queue, take this path. This rarely happens: every custom queue has one of the root queues as its target queue
		return _dispatch_sync_function_invoke(dq, ctxt, func);
	}
	// the common path
	_dispatch_sync_wait(dq, ctxt, func, dc_flags, dq, dc_flags);
}

Here is a hyper-abridged version of _dispatch_sync_wait:

DISPATCH_NOINLINE
static void
_dispatch_sync_wait(dispatch_queue_t top_dq, void *ctxt,
		dispatch_function_t func, uintptr_t top_dc_flags,
		dispatch_queue_t dq, uintptr_t dc_flags)
{
	pthread_priority_t pp = _dispatch_get_priority();
	dispatch_tid tid = _dispatch_tid_self();
	dispatch_qos_t qos;
	uint64_t dq_state;
	// Step 1. Detect whether a deadlock would occur; if so, crash right away
	dq_state = _dispatch_sync_wait_prepare(dq);
	// If the current thread already owns the target queue, calling _dispatch_sync_wait now triggers a crash.
	// The check asks whether the lock's owner is tid (the tid was encoded into dq_state's lock, so GCD can automatically recognize the deadlock case: the same serial queue locked twice by the same thread)
	if (unlikely(_dq_state_drain_locked_by(dq_state, tid))) {
		DISPATCH_CLIENT_CRASH((uintptr_t)dq_state,
				"dispatch_sync called on queue "
				"already owned by current thread");
	}

	// Step 2. _dispatch_queue_push_sync_waiter(dq, &dsc, qos); // enqueue the task to be executed

	// Step 3. Wait for the preceding tasks to finish
	if (dsc.dc_data == DISPATCH_WLH_ANON) {
		// wait on a thread event until it completes (the dispatch_sync mode)
		_dispatch_thread_event_wait(&dsc.dsc_event); // semaphore-style wait
		_dispatch_thread_event_destroy(&dsc.dsc_event); // wait over: destroy the thread event
		// If _dispatch_sync_waiter_wake() gave this thread an override,
		// ensure that the root queue sees it.
		if (dsc.dsc_override_qos > dsc.dsc_override_qos_floor) {
			_dispatch_set_basepri_override_qos(dsc.dsc_override_qos);
		}
	} else {
		_dispatch_event_loop_wait_for_ownership(&dsc);
	}
	
	// Step 4.
	// The wait is over; run the client code
	_dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func, top_dc_flags);
}

Here we focus on how GCD detects the deadlock in Step 1. It ultimately calls this function:

DISPATCH_ALWAYS_INLINE
static inline bool
_dispatch_lock_is_locked_by(dispatch_lock lock_value, dispatch_tid tid)
{
	// equivalent to _dispatch_lock_owner(lock_value) == tid
	return ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0;
}

lock_value is the lock portion of dq_state, a 32-bit integer. Whether ((lock_value ^ tid) & DLOCK_OWNER_MASK) is 0 tells us whether the serial queue is already held by this very thread. If the current queue is already owned by the current thread, i.e. the thread is in the middle of one of its serial tasks, then blocking to wait for a new serial task would deadlock. So the new GCD deliberately crashes when ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0, to avoid the deadlock.
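
The XOR trick is easy to verify with concrete numbers: a ^ b is zero exactly on the bits where a and b agree, and the mask restricts the comparison to the owner-tid bits. A tiny self-contained check (the mask value here is illustrative, not copied from the libdispatch headers):

#include <stdint.h>
#include <stdio.h>

#define DEMO_OWNER_MASK 0xfffffffcu // illustrative: ignore the low flag bits

int main(void)
{
	uint32_t tid        = 0x1234abc8;   // pretend owner tid
	uint32_t lock_value = tid | 0x3;    // owner tid with flag bits set
	uint32_t other_tid  = 0x55aa77c8;   // some other thread's tid
	printf("%d\n", ((lock_value ^ tid)       & DEMO_OWNER_MASK) == 0); // 1: same owner
	printf("%d\n", ((lock_value ^ other_tid) & DEMO_OWNER_MASK) == 0); // 0: different owner
	return 0;
}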

The serial-queue deadlock scenario is:

Thread A is executing serial task task1 on serial queue dq. If it then submits serial task task2 to dq while also requiring the current thread to block until task2 finishes (i.e. it submits task2 via dispatch_sync), a deadlock occurs.

task1 has not finished, so the serial queue will not start task2; yet on the current thread we refuse to continue task1 until task2 finishes. task1 waits on task2 and task2 waits on task1: a circular wait, hence a deadlock.

To summarize, a GCD deadlock requires all 3 of the following conditions to hold at once, producing the mutual wait between task1 and task2:

  1. A serial queue is executing task1
  2. From within task1, task2 is submitted to the same serial queue
  3. task2 is submitted with dispatch_sync, which blocks the current thread

GCD's deadlock detection does not actually cover all 3 conditions. For condition 2 it adds an extra restriction: task2 must be submitted from the thread that is executing task1. Condition 2 itself carries no such restriction; task2 can be submitted from a thread different from task1's and still cause a deadlock. In that case GCD cannot detect it: there is no crash, just a silent deadlock.

Below are a deadlock that GCD detects and one that it does not; try to work out why:

  // Serial-queue deadlock crash example (while a serial-queue task is executing on some thread, dispatch_sync-ing another task onto the same queue from that thread crashes)
  //==============================
    dispatch_queue_t sQ = dispatch_queue_create("st0", 0);
    dispatch_async(sQ, ^{
        NSLog(@"Enter");
        dispatch_sync(sQ, ^{   //  this line crashes
            NSLog(@"sync task");
        });
    });

   // Serial deadlock example (no crash here: while thread A executes serial task1, a task2 is submitted to the serial queue from thread B with dispatch_sync and waited on; it deadlocks, but GCD cannot detect it)
    //==============================
    dispatch_queue_t sQ1 = dispatch_queue_create("st01", 0);
    dispatch_async(sQ1, ^{
        NSLog(@"Enter");
        dispatch_sync(dispatch_get_main_queue(), ^{
            dispatch_sync(sQ1, ^{
                NSArray *a = [NSArray new];
                NSLog(@"Enter again %@", a);
            });
        });
        NSLog(@"Done");
    });

With a well-formed structure, a serial queue does not deadlock; task1 and task2 simply execute one after the other:

    // Serial-queue waiting example 1
    //==============================
    dispatch_queue_t sQ1 = dispatch_queue_create("st01", 0);
    dispatch_async(sQ1, ^{
        NSLog(@"Enter");
        sleep(5);
        NSLog(@"Done");
    });
    dispatch_sync(sQ1, ^{
        NSLog(@"It is my turn");
    });

Now let's look at the branch concurrent queues take:

DISPATCH_NOINLINE
static void
_dispatch_sync_invoke_and_complete(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func)
{
	_dispatch_sync_function_invoke_inline(dq, ctxt, func); // call client function
	_dispatch_queue_non_barrier_complete(dq); // done
}

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_sync_function_invoke_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func)
{
	dispatch_thread_frame_s dtf;
	_dispatch_thread_frame_push(&dtf, dq); // save the current thread frame
	_dispatch_client_callout(ctxt, func); // call back into client code
	_dispatch_perfmon_workitem_inc();
	_dispatch_thread_frame_pop(&dtf);
}

As you can see, a concurrent queue does not create a thread to execute a dispatch_sync task.
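
This is also easy to confirm from user code: the block passed to dispatch_sync runs on the very thread that called it. A quick check (ordinary public API):

    dispatch_queue_t cq = dispatch_queue_create("cq", DISPATCH_QUEUE_CONCURRENT);
    NSLog(@"caller thread: %@", [NSThread currentThread]);
    dispatch_sync(cq, ^{
        // prints the same thread as the caller: no thread hop for dispatch_sync
        NSLog(@"block thread:  %@", [NSThread currentThread]);
    });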

dispatch_async

Async dispatch to a custom serial queue

    dispatch_queue_t sq1 = dispatch_queue_create("sq1", NULL);
    dispatch_async(sq1, ^{
        NSLog(@"Serial async task");
    });

Its call stack looks like this:
(Screenshot: call stack of dispatch_async onto a custom serial queue)

Let's look at the source:

void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
	dispatch_continuation_t dc = _dispatch_continuation_alloc();
	// set the flag bits
	uintptr_t dc_flags = DISPATCH_OBJ_CONSUME_BIT;
	// wrap work into a dispatch_continuation_t
	_dispatch_continuation_init(dc, dq, work, 0, 0, dc_flags);
	_dispatch_continuation_async(dq, dc);
}

Whatever kind of queue dq is, GCD first wraps work into a dispatch_continuation_t and then calls _dispatch_continuation_async.

Before going deeper, let's see how work gets packaged:

static inline void
_dispatch_continuation_init(dispatch_continuation_t dc,
		dispatch_queue_class_t dqu, dispatch_block_t work,
		pthread_priority_t pp, dispatch_block_flags_t flags, uintptr_t dc_flags)
{
	dc->dc_flags = dc_flags | DISPATCH_OBJ_BLOCK_BIT;
	// wrap work into the dispatch_continuation_t
	dc->dc_ctxt = _dispatch_Block_copy(work);
	_dispatch_continuation_priority_set(dc, pp, flags);

	if (unlikely(_dispatch_block_has_private_data(work))) {
		// always sets dc_func & dc_voucher
		// may update dc_priority & do_vtable
		return _dispatch_continuation_init_slow(dc, dqu, flags);
	}

	if (dc_flags & DISPATCH_OBJ_CONSUME_BIT) { // dc_flags was set to DISPATCH_OBJ_CONSUME_BIT earlier, so we take this path
		dc->dc_func = _dispatch_call_block_and_release; // set dc's work function: 1. invoke the block 2. release the block object
	} else {
		dc->dc_func = _dispatch_Block_invoke(work);
	}
	_dispatch_continuation_voucher_set(dc, dqu, flags);
}

Note dc->dc_func = _dispatch_call_block_and_release, which is called when the dispatch_continuation_t dc is executed:

void
_dispatch_call_block_and_release(void *block)
{
	void (^b)(void) = block;
	b();
	Block_release(b);
}

dc_func's logic is simple: invoke the block, then release it.

Now that we have seen how work is packaged into a dispatch_continuation_t, let's return to _dispatch_continuation_async, which takes two arguments: the work queue and the work in dispatch_continuation_t form:

void
_dispatch_continuation_async(dispatch_queue_t dq, dispatch_continuation_t dc)
{
	_dispatch_continuation_async2(dq, dc,
			dc->dc_flags & DISPATCH_OBJ_BARRIER_BIT);
}

static inline void
_dispatch_continuation_async2(dispatch_queue_t dq, dispatch_continuation_t dc,
		bool barrier)
{
	// Tasks inserted with barrier, or destined for a serial queue, are enqueued directly
	// #define DISPATCH_QUEUE_USES_REDIRECTION(width) \
	//    ({ uint16_t _width = (width); \
	//    _width > 1 && _width < DISPATCH_QUEUE_WIDTH_POOL; })
	if (fastpath(barrier || !DISPATCH_QUEUE_USES_REDIRECTION(dq->dq_width))) {
		return _dispatch_continuation_push(dq, dc); // enqueue
	}
	return _dispatch_async_f2(dq, dc); // concurrent queues go this way
}

_dispatch_continuation_async2 is where the paths diverge: (1) serial (or barrier) goes to _dispatch_continuation_push; (2) concurrent goes to _dispatch_async_f2.

We are following the custom serial queue here, so let's continue down the _dispatch_continuation_push branch.

static void
_dispatch_continuation_push(dispatch_queue_t dq, dispatch_continuation_t dc)
{
	dx_push(dq, dc, _dispatch_continuation_override_qos(dq, dc));
}
// dx_push is a macro:
#define dx_push(x, y, z) dx_vtable(x)->do_push(x, y, z)
#define dx_vtable(x) (&(x)->do_vtable->_os_obj_vtable)

It invokes the queue's do_push method, whose definition can be found in init.c:

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_serial, queue,
	.do_type = DISPATCH_QUEUE_SERIAL_TYPE,
	.do_kind = "serial-queue",
	.do_dispose = _dispatch_queue_dispose,
	.do_suspend = _dispatch_queue_suspend,
	.do_resume = _dispatch_queue_resume,
	.do_finalize_activation = _dispatch_queue_finalize_activation,
	.do_push = _dispatch_queue_push,
	.do_invoke = _dispatch_queue_invoke,
	.do_wakeup = _dispatch_queue_wakeup,
	.do_debug = dispatch_queue_debug,
	.do_set_targetq = _dispatch_queue_set_target_queue,
);

Looking at the definition of _dispatch_queue_push:

void
_dispatch_queue_push(dispatch_queue_t dq, dispatch_object_t dou,
		dispatch_qos_t qos)
{
	_dispatch_queue_push_inline(dq, dou, qos);
}

#define _dispatch_queue_push_inline _dispatch_trace_queue_push_inline

static inline void
_dispatch_trace_queue_push_inline(dispatch_queue_t dq, dispatch_object_t _tail,
		dispatch_qos_t qos)
{
	if (slowpath(DISPATCH_QUEUE_PUSH_ENABLED())) {
		struct dispatch_object_s *dou = _tail._do;
		_dispatch_trace_continuation(dq, dou, DISPATCH_QUEUE_PUSH);
	}
	
	_dispatch_introspection_queue_push(dq, _tail); // the first push apparently just reports the enqueue event for introspection
	_dispatch_queue_push_inline(dq, _tail, qos);  // the second push actually enqueues; _tail here is really a _dispatch_continuation_t
}

static inline void
_dispatch_queue_push_inline(dispatch_queue_t dq, dispatch_object_t _tail,
		dispatch_qos_t qos)
{
	struct dispatch_object_s *tail = _tail._do;
	dispatch_wakeup_flags_t flags = 0;
	bool overriding = _dispatch_queue_need_override_retain(dq, qos);
	if (unlikely(_dispatch_queue_push_update_tail(dq, tail))) { // append tail to dq
		if (!overriding) _dispatch_retain_2(dq->_as_os_obj);
		_dispatch_queue_push_update_head(dq, tail);
		flags = DISPATCH_WAKEUP_CONSUME_2 | DISPATCH_WAKEUP_MAKE_DIRTY;
	} else if (overriding) {
		flags = DISPATCH_WAKEUP_CONSUME_2;
	} else {
		return;
	}
	return dx_wakeup(dq, qos, flags);
}

Here the task we submitted is appended to the tail of dq. Once the task is enqueued, dx_wakeup is called to wake the queue:

#define dx_wakeup(x, y, z) dx_vtable(x)->do_wakeup(x, y, z)

Again from the init.c file, do_wakeup is defined as:

void
_dispatch_queue_wakeup(dispatch_queue_t dq, dispatch_qos_t qos,
		dispatch_wakeup_flags_t flags)
{
	dispatch_queue_wakeup_target_t target = DISPATCH_QUEUE_WAKEUP_NONE;

	if (unlikely(flags & DISPATCH_WAKEUP_BARRIER_COMPLETE)) {
		return _dispatch_queue_barrier_complete(dq, qos, flags);
	}
	if (_dispatch_queue_class_probe(dq)) { // if dq has tasks, target = DISPATCH_QUEUE_WAKEUP_TARGET. On our first pass through _dispatch_queue_wakeup, dq is our custom queue and we take this branch
		target = DISPATCH_QUEUE_WAKEUP_TARGET;
	}
	return _dispatch_queue_class_wakeup(dq, qos, flags, target);
}

When dq is our custom queue, the task was already enqueued, so dq certainly has work, and target is therefore set to DISPATCH_QUEUE_WAKEUP_TARGET.

Since target != NULL on first entry, irrelevant code is removed below:

void
_dispatch_queue_class_wakeup(dispatch_queue_t dq, dispatch_qos_t qos,
		dispatch_wakeup_flags_t flags, dispatch_queue_wakeup_target_t target)
{
	dispatch_assert(target != DISPATCH_QUEUE_WAKEUP_WAIT_FOR_EVENT);

	if (target && !(flags & DISPATCH_WAKEUP_CONSUME_2)) {
		_dispatch_retain_2(dq);
		flags |= DISPATCH_WAKEUP_CONSUME_2;
	}

	// If dq is a root queue, target is most likely NULL here; otherwise target == DISPATCH_QUEUE_WAKEUP_TARGET, and _dispatch_queue_push_queue is called to enqueue the custom dq, wake up is invoked once more, and the work finally executes on the root queue
	if (target) {
		uint64_t old_state, new_state, enqueue = DISPATCH_QUEUE_ENQUEUED;
		qos = _dispatch_queue_override_qos(dq, qos);
		os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, release, {
			new_state = _dq_state_merge_qos(old_state, qos);
			if (likely(!_dq_state_is_suspended(old_state) &&
					!_dq_state_is_enqueued(old_state) &&
					(!_dq_state_drain_locked(old_state) ||
					(enqueue != DISPATCH_QUEUE_ENQUEUED_ON_MGR &&
					_dq_state_is_base_wlh(old_state))))) {
				new_state |= enqueue;
			}
			if (flags & DISPATCH_WAKEUP_MAKE_DIRTY) {
				new_state |= DISPATCH_QUEUE_DIRTY;
			} else if (new_state == old_state) {
				os_atomic_rmw_loop_give_up(goto done);
			}
		});

		if (likely((old_state ^ new_state) & enqueue)) {
			dispatch_queue_t tq;
			if (target == DISPATCH_QUEUE_WAKEUP_TARGET) {
				os_atomic_thread_fence(dependency);
				tq = os_atomic_load_with_dependency_on2o(dq, do_targetq,
						(long)new_state);
			}
			dispatch_assert(_dq_state_is_enqueued(new_state));
			return _dispatch_queue_push_queue(tq, dq, new_state); // push dq onto its target queue and call wake up again, with tq passed in as dq
		}
	}
}

On the first pass through the wake up method, GCD calls _dispatch_queue_push_queue to enqueue our custom dq onto its target queue, i.e. one of the root queues:

static inline void
_dispatch_queue_push_queue(dispatch_queue_t tq, dispatch_queue_t dq,
		uint64_t dq_state)
{
	return dx_push(tq, dq, _dq_state_max_qos(dq_state));
}

dx_push is invoked again, adding our custom dq to the target queue.

So when does our work actually get executed on the root queue?
Let's look at the call stack:
(Screenshot: call stack of a worker thread draining the root queue)

The root queues maintain a thread pool; when a pool thread runs, it calls _dispatch_worker_thread3. Why thread3? How is the pool created? We will come back to those questions; for now, here is _dispatch_worker_thread3:

static void
_dispatch_worker_thread3(pthread_priority_t pp)
{
	bool overcommit = pp & _PTHREAD_PRIORITY_OVERCOMMIT_FLAG;
	dispatch_queue_t dq;
	pp &= _PTHREAD_PRIORITY_OVERCOMMIT_FLAG | ~_PTHREAD_PRIORITY_FLAGS_MASK;
	_dispatch_thread_setspecific(dispatch_priority_key, (void *)(uintptr_t)pp);
	dq = _dispatch_get_root_queue(_dispatch_qos_from_pp(pp), overcommit); // pick the matching root queue from the root-queue array, based on the thread priority and whether it overcommits
	return _dispatch_worker_thread4(dq); // which ultimately calls this
}
static void
_dispatch_worker_thread4(void *context)
{
	dispatch_queue_t dq = context;
	dispatch_root_queue_context_t qc = dq->do_ctxt;

	_dispatch_introspection_thread_add();
	int pending = os_atomic_dec2o(qc, dgq_pending, relaxed);
	dispatch_assert(pending >= 0);
	_dispatch_root_queue_drain(dq, _dispatch_get_priority()); // drain the root queue of all its tasks and execute them
	_dispatch_voucher_debug("root queue clear", NULL);
	_dispatch_reset_voucher(NULL, DISPATCH_THREAD_PARK);
}

Let's see how a root queue gets 'drained':

static void
_dispatch_root_queue_drain(dispatch_queue_t dq, pthread_priority_t pp)
{
#if DISPATCH_DEBUG
	dispatch_queue_t cq;
	if (slowpath(cq = _dispatch_queue_get_current())) {
		DISPATCH_INTERNAL_CRASH(cq, "Premature thread recycling");
	}
#endif
	_dispatch_queue_set_current(dq); // mark dq as the dispatch thread's current queue
	dispatch_priority_t pri = dq->dq_priority;
	if (!pri) pri = _dispatch_priority_from_pp(pp);
	dispatch_priority_t old_dbp = _dispatch_set_basepri(pri);
	_dispatch_adopt_wlh_anon();

	struct dispatch_object_s *item;
	bool reset = false;
	dispatch_invoke_context_s dic = { };
#if DISPATCH_COCOA_COMPAT
	_dispatch_last_resort_autorelease_pool_push(&dic);
#endif // DISPATCH_COCOA_COMPAT
	dispatch_invoke_flags_t flags = DISPATCH_INVOKE_WORKER_DRAIN |
			DISPATCH_INVOKE_REDIRECTING_DRAIN;
	_dispatch_queue_drain_init_narrowing_check_deadline(&dic, pri);
	_dispatch_perfmon_start();
	while ((item = fastpath(_dispatch_root_queue_drain_one(dq)))) { // take one item off the queue
		if (reset) _dispatch_wqthread_override_reset();
		_dispatch_continuation_pop_inline(item, &dic, flags, dq); // execute the item
		reset = _dispatch_reset_basepri_override();
		if (unlikely(_dispatch_queue_drain_should_narrow(&dic))) {
			break;
		}
	}

	// overcommit or not. worker thread
	if (pri & _PTHREAD_PRIORITY_OVERCOMMIT_FLAG) {
		_dispatch_perfmon_end(perfmon_thread_worker_oc);
	} else {
		_dispatch_perfmon_end(perfmon_thread_worker_non_oc);
	}

#if DISPATCH_COCOA_COMPAT
	_dispatch_last_resort_autorelease_pool_pop(&dic);
#endif // DISPATCH_COCOA_COMPAT
	_dispatch_reset_wlh();
	_dispatch_reset_basepri(old_dbp);
	_dispatch_reset_basepri_override();
	_dispatch_queue_set_current(NULL);  // mark the dispatch thread's current queue as NULL
}

That is a lot of code; the core is the while loop in the middle:

while ((item = fastpath(_dispatch_root_queue_drain_one(dq)))) { // take one item off the queue
		if (reset) _dispatch_wqthread_override_reset();
		_dispatch_continuation_pop_inline(item, &dic, flags, dq); // execute the item
		reset = _dispatch_reset_basepri_override();
		if (unlikely(_dispatch_queue_drain_should_narrow(&dic))) {
			break;
		}
}

In this loop, GCD pops one queue item at a time from the root queue and executes it via _dispatch_continuation_pop_inline, until the root queue's items are all cleared.

Let's see how a queue item is executed; the queue item here should be a dispatch queue:

static inline void
_dispatch_continuation_pop_inline(dispatch_object_t dou,
		dispatch_invoke_context_t dic, dispatch_invoke_flags_t flags,
		dispatch_queue_t dq)
{
	dispatch_pthread_root_queue_observer_hooks_t observer_hooks =
			_dispatch_get_pthread_root_queue_observer_hooks();
	if (observer_hooks) observer_hooks->queue_will_execute(dq);
	_dispatch_trace_continuation_pop(dq, dou);
	flags &= _DISPATCH_INVOKE_PROPAGATE_MASK;
	if (_dispatch_object_has_vtable(dou)) {  // only here does the task we submitted finally get executed by the queue. What a long and winding road!
		dx_invoke(dou._do, dic, flags);
	} else {
		_dispatch_continuation_invoke_inline(dou, DISPATCH_NO_VOUCHER, flags);
	}
	if (observer_hooks) observer_hooks->queue_did_execute(dq);
}

Here dx_invoke is a macro that calls the _dispatch_queue_invoke method; combined with the call stack, it ultimately ends up in
_dispatch_queue_serial_drain.

OK, those are the execution steps for dispatch_async to a serial queue. In short:
the work is wrapped into a dispatch_continuation_t, the dq is enqueued onto the corresponding root queue, a thread from the root queue's thread pool is woken up and runs the thread function _dispatch_worker_thread3, and the root queue is drained, executing the tasks queued on it.

Now let's look at dispatch_async onto a concurrent queue:

(Screenshot: call stack of dispatch_async onto a concurrent queue)

Back at the point above where the serial and concurrent paths diverge:

static inline void
_dispatch_continuation_async2(dispatch_queue_t dq, dispatch_continuation_t dc,
		bool barrier)
{
	// Tasks inserted with barrier, or destined for a serial queue, are enqueued directly
	// #define DISPATCH_QUEUE_USES_REDIRECTION(width) \
	//    ({ uint16_t _width = (width); \
	//    _width > 1 && _width < DISPATCH_QUEUE_WIDTH_POOL; })
	if (fastpath(barrier || !DISPATCH_QUEUE_USES_REDIRECTION(dq->dq_width))) {
		return _dispatch_continuation_push(dq, dc); // enqueue
	}
	return _dispatch_async_f2(dq, dc); // concurrent queues go this way
}

Concurrent queues go to _dispatch_async_f2:

static void
_dispatch_async_f2(dispatch_queue_t dq, dispatch_continuation_t dc)
{
	// <rdar://problem/24738102&24743140> reserving non barrier width
	// doesn't fail if only the ENQUEUED bit is set (unlike its barrier width
	// equivalent), so we have to check that this thread hasn't enqueued
	// anything ahead of this call or we can break ordering
	if (slowpath(dq->dq_items_tail)) {
		return _dispatch_continuation_push(dq, dc);
	}

	if (slowpath(!_dispatch_queue_try_acquire_async(dq))) {
		return _dispatch_continuation_push(dq, dc);
	}

	// async redirection: execution of the task moves from the custom queue to a root queue
	return _dispatch_async_f_redirect(dq, dc,
			_dispatch_continuation_override_qos(dq, dc));
}

_dispatch_async_f_redirect is called here; as before, it redirects dq to a root queue.

static void
_dispatch_async_f_redirect(dispatch_queue_t dq,
		dispatch_object_t dou, dispatch_qos_t qos)
{
	if (!slowpath(_dispatch_object_is_redirection(dou))) {
		dou._dc = _dispatch_async_redirect_wrap(dq, dou);
	}
	
	// replace dq with its root queue
	dq = dq->do_targetq;

	// usually not entered; its job is to replace dq with the final target queue
	while (slowpath(DISPATCH_QUEUE_USES_REDIRECTION(dq->dq_width))) {
		if (!fastpath(_dispatch_queue_try_acquire_async(dq))) {
			break;
		}
		if (!dou._dc->dc_ctxt) {
			// find first queue in descending target queue order that has
			// an autorelease frequency set, and use that as the frequency for
			// this continuation.
			dou._dc->dc_ctxt = (void *)
					(uintptr_t)_dispatch_queue_autorelease_frequency(dq);
		}
		dq = dq->do_targetq;
	}

	// enqueue the task; expanding the macros:
	// #define dx_push(x, y, z) dx_vtable(x)->do_push(x, y, z)
	// #define dx_vtable(x) (&(x)->do_vtable->_os_obj_vtable)
	// since x here is really a root queue, see init.c:
	// do_push actually calls _dispatch_root_queue_push
	dx_push(dq, dou, qos);
}

dx_push appears again; per init.c, it actually calls _dispatch_root_queue_push:

void
_dispatch_root_queue_push(dispatch_queue_t rq, dispatch_object_t dou,
		dispatch_qos_t qos)
{
#if DISPATCH_USE_KEVENT_WORKQUEUE
	dispatch_deferred_items_t ddi = _dispatch_deferred_items_get();
	if (unlikely(ddi && ddi->ddi_can_stash)) {
		dispatch_object_t old_dou = ddi->ddi_stashed_dou;
		dispatch_priority_t rq_overcommit;
		rq_overcommit = rq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT;

		if (likely(!old_dou._do || rq_overcommit)) {
			dispatch_queue_t old_rq = ddi->ddi_stashed_rq;
			dispatch_qos_t old_qos = ddi->ddi_stashed_qos;
			ddi->ddi_stashed_rq = rq;
			ddi->ddi_stashed_dou = dou;
			ddi->ddi_stashed_qos = qos;
			_dispatch_debug("deferring item %p, rq %p, qos %d",
					dou._do, rq, qos);
			if (rq_overcommit) {
				ddi->ddi_can_stash = false;
			}
			if (likely(!old_dou._do)) {
				return;
			}
			// push the previously stashed item
			qos = old_qos;
			rq = old_rq;
			dou = old_dou;
		}
	}
#endif
#if HAVE_PTHREAD_WORKQUEUE_QOS
	if (_dispatch_root_queue_push_needs_override(rq, qos)) { // compare the root queue's priority against the requested one; if they differ (they usually do), take this branch
		return _dispatch_root_queue_push_override(rq, dou, qos);
	}
#else
	(void)qos;
#endif
	_dispatch_root_queue_push_inline(rq, dou, dou, 1);
}

which calls _dispatch_root_queue_push_override:

static void
_dispatch_root_queue_push_override(dispatch_queue_t orig_rq,
		dispatch_object_t dou, dispatch_qos_t qos)
{
	bool overcommit = orig_rq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
	dispatch_queue_t rq = _dispatch_get_root_queue(qos, overcommit); // look up the root queue by priority
	dispatch_continuation_t dc = dou._dc;

	if (_dispatch_object_is_redirection(dc)) {
		// no double-wrap is needed, _dispatch_async_redirect_invoke will do
		// the right thing
		dc->dc_func = (void *)orig_rq;
	} else {
		dc = _dispatch_continuation_alloc();
		dc->do_vtable = DC_VTABLE(OVERRIDE_OWNING);
		// fake that we queued `dou` on `orig_rq` for introspection purposes
		_dispatch_trace_continuation_push(orig_rq, dou);
		dc->dc_ctxt = dc;
		dc->dc_other = orig_rq;
		dc->dc_data = dou._do;
		dc->dc_priority = DISPATCH_NO_PRIORITY;
		dc->dc_voucher = DISPATCH_NO_VOUCHER;
	}
	_dispatch_root_queue_push_inline(rq, dc, dc, 1); // ends up here again
}
static inline void
_dispatch_root_queue_push_inline(dispatch_queue_t dq, dispatch_object_t _head,
		dispatch_object_t _tail, int n)
{
	struct dispatch_object_s *head = _head._do, *tail = _tail._do;
	// Entered when the queue is empty and the head needs to be set; this should happen the first time the root queue is used
	if (unlikely(_dispatch_queue_push_update_tail_list(dq, head, tail))) { // try to update dq's tail list; if the list was empty (first use), this returns true
		_dispatch_queue_push_update_head(dq, head); // set the queue head
		return _dispatch_global_queue_poke(dq, n, 0); // poke the root queue awake; n is the number of enqueued dq's, here 1
	}
}

The check above matters:

if (unlikely(_dispatch_queue_push_update_tail_list(dq, head, tail))) { // try to update dq's tail list; if the list was empty (first use), this returns true
		_dispatch_queue_push_update_head(dq, head); // set the queue head
		return _dispatch_global_queue_poke(dq, n, 0); // poke the root queue awake; n is the number of enqueued dq's, here 1
	}

When a new task comes in, GCD updates the root queue's task list. If this is the first task ever submitted to this root queue, the list was still empty, and GCD proceeds into _dispatch_global_queue_poke to activate the root queue:

void
_dispatch_global_queue_poke(dispatch_queue_t dq, int n, int floor)
{
	if (!_dispatch_queue_class_probe(dq)) {  // nothing to execute: return immediately
		return;
	}
#if DISPATCH_USE_WORKQUEUES
	dispatch_root_queue_context_t qc = dq->do_ctxt;
	if (
#if DISPATCH_USE_PTHREAD_POOL
			(qc->dgq_kworkqueue != (void*)(~0ul)) &&
#endif
			!os_atomic_cmpxchg2o(qc, dgq_pending, 0, n, relaxed)) {
		_dispatch_root_queue_debug("worker thread request still pending for "
				"global queue: %p", dq);
		return;
	}
#endif // DISPATCH_USE_WORKQUEUES
	return _dispatch_global_queue_poke_slow(dq, n, floor);
}

The poke continues into its second stage, _dispatch_global_queue_poke_slow:

static void
_dispatch_global_queue_poke_slow(dispatch_queue_t dq, int n, int floor)
{
	dispatch_root_queue_context_t qc = dq->do_ctxt;
	int remaining = n;  // remaining is the number of tasks to run, here 1
	int r = ENOSYS;

	// step 1. initialize the root queues, including the XNU workqueue
	_dispatch_root_queues_init();
	_dispatch_debug_root_queue(dq, __func__);
#if DISPATCH_USE_WORKQUEUES
#if DISPATCH_USE_PTHREAD_POOL
	if (qc->dgq_kworkqueue != (void*)(~0ul))
#endif
	{
		_dispatch_root_queue_debug("requesting new worker thread for global "
				"queue: %p", dq);
#if DISPATCH_USE_LEGACY_WORKQUEUE_FALLBACK
		if (qc->dgq_kworkqueue) {
			pthread_workitem_handle_t wh;
			unsigned int gen_cnt;
			do {
				// call into the XNU kernel's workqueue functions to maintain the GCD-level pthread pool
				r = pthread_workqueue_additem_np(qc->dgq_kworkqueue,
						_dispatch_worker_thread4, dq, &wh, &gen_cnt);
				(void)dispatch_assume_zero(r);
			} while (--remaining);
			return;
		}
#endif // DISPATCH_USE_LEGACY_WORKQUEUE_FALLBACK
#if HAVE_PTHREAD_WORKQUEUE_QOS
		r = _pthread_workqueue_addthreads(remaining,
				_dispatch_priority_to_pp(dq->dq_priority));
#elif DISPATCH_USE_PTHREAD_WORKQUEUE_SETDISPATCH_NP
		r = pthread_workqueue_addthreads_np(qc->dgq_wq_priority,
				qc->dgq_wq_options, remaining);
#endif
		(void)dispatch_assume_zero(r);
		return;
	}
#endif // DISPATCH_USE_WORKQUEUES
#if DISPATCH_USE_PTHREAD_POOL
	dispatch_pthread_root_queue_context_t pqc = qc->dgq_ctxt;
	if (fastpath(pqc->dpq_thread_mediator.do_vtable)) {
		while (dispatch_semaphore_signal(&pqc->dpq_thread_mediator)) { // step 2. wake a thread to do the work
			_dispatch_root_queue_debug("signaled sleeping worker for "
					"global queue: %p", dq);
			if (!--remaining) { // no dq's left to handle: return
				return;
			}
		}
	}
#endif // DISPATCH_USE_PTHREAD_POOL
}

To be honest, this part is not entirely clear to me; the key is step 1, the root-queue initialization method _dispatch_root_queues_init:

void
_dispatch_root_queues_init(void)
{
	// dispatch_once_f is used here, so this runs only once
	static dispatch_once_t _dispatch_root_queues_pred;
	dispatch_once_f(&_dispatch_root_queues_pred, NULL,
			_dispatch_root_queues_init_once);
}
static void
_dispatch_root_queues_init_once(void *context DISPATCH_UNUSED)
{
	int wq_supported;
	_dispatch_fork_becomes_unsafe();
	if (!_dispatch_root_queues_init_workq(&wq_supported)) {
#if DISPATCH_ENABLE_THREAD_POOL
		size_t i;
		for (i = 0; i < DISPATCH_ROOT_QUEUE_COUNT; i++) {
			bool overcommit = true;
#if TARGET_OS_EMBEDDED || (DISPATCH_USE_INTERNAL_WORKQUEUE && HAVE_DISPATCH_WORKQ_MONITORING)
			// some software hangs if the non-overcommitting queues do not
			// overcommit when threads block. Someday, this behavior should
			// apply to all platforms
			if (!(i & 1)) {
				overcommit = false;
			}
#endif
			_dispatch_root_queue_init_pthread_pool(
					&_dispatch_root_queue_contexts[i], 0, overcommit);
		}
#else
		DISPATCH_INTERNAL_CRASH((errno << 16) | wq_supported,
				"Root queue initialization failed");
#endif // DISPATCH_ENABLE_THREAD_POOL
	}
}

This initialization has two branches. GCD first calls _dispatch_root_queues_init_workq to initialize the root queues, and only falls back to _dispatch_root_queue_init_pthread_pool if that fails. In other words, GCD prefers the workqueue provided by the XNU kernel over a user-space thread pool. We will focus on the workqueue case:

static inline bool
_dispatch_root_queues_init_workq(int *wq_supported)
{
	int r; (void)r;
	bool result = false;
	*wq_supported = 0;
#if DISPATCH_USE_WORKQUEUES
	bool disable_wq = false; (void)disable_wq;
#if DISPATCH_ENABLE_THREAD_POOL && DISPATCH_DEBUG
	disable_wq = slowpath(getenv("LIBDISPATCH_DISABLE_KWQ"));
#endif
#if DISPATCH_USE_KEVENT_WORKQUEUE || HAVE_PTHREAD_WORKQUEUE_QOS
	bool disable_qos = false;
#if DISPATCH_DEBUG
	disable_qos = slowpath(getenv("LIBDISPATCH_DISABLE_QOS"));
#endif
#if DISPATCH_USE_KEVENT_WORKQUEUE
	bool disable_kevent_wq = false;
#if DISPATCH_DEBUG || DISPATCH_PROFILE
	disable_kevent_wq = slowpath(getenv("LIBDISPATCH_DISABLE_KEVENT_WQ"));
#endif
#endif

	if (!disable_wq && !disable_qos) {
		*wq_supported = _pthread_workqueue_supported();
#if DISPATCH_USE_KEVENT_WORKQUEUE
		if (!disable_kevent_wq && (*wq_supported & WORKQ_FEATURE_KEVENT)) {
			r = _pthread_workqueue_init_with_kevent(_dispatch_worker_thread3,
					(pthread_workqueue_function_kevent_t)
					_dispatch_kevent_worker_thread,
					offsetof(struct dispatch_queue_s, dq_serialnum), 0);
#if DISPATCH_USE_MGR_THREAD
			_dispatch_kevent_workqueue_enabled = !r;
#endif
			result = !r;
		} else
#endif // DISPATCH_USE_KEVENT_WORKQUEUE
		if (*wq_supported & WORKQ_FEATURE_FINEPRIO) {
#if DISPATCH_USE_MGR_THREAD
			r = _pthread_workqueue_init(_dispatch_worker_thread3,
					offsetof(struct dispatch_queue_s, dq_serialnum), 0);
			result = !r;
#endif
		}
		if (!(*wq_supported & WORKQ_FEATURE_MAINTENANCE)) {
			DISPATCH_INTERNAL_CRASH(*wq_supported,
					"QoS Maintenance support required");
		}
	}
#endif // DISPATCH_USE_KEVENT_WORKQUEUE || HAVE_PTHREAD_WORKQUEUE_QOS
#if DISPATCH_USE_PTHREAD_WORKQUEUE_SETDISPATCH_NP
	if (!result && !disable_wq) {
		pthread_workqueue_setdispatchoffset_np(
				offsetof(struct dispatch_queue_s, dq_serialnum));
		r = pthread_workqueue_setdispatch_np(_dispatch_worker_thread2);
#if !DISPATCH_USE_LEGACY_WORKQUEUE_FALLBACK
		(void)dispatch_assume_zero(r);
#endif
		result = !r;
	}
#endif // DISPATCH_USE_PTHREAD_WORKQUEUE_SETDISPATCH_NP
#if DISPATCH_USE_LEGACY_WORKQUEUE_FALLBACK || DISPATCH_USE_PTHREAD_POOL
	if (!result) {
#if DISPATCH_USE_LEGACY_WORKQUEUE_FALLBACK
		pthread_workqueue_attr_t pwq_attr;
		if (!disable_wq) {
			r = pthread_workqueue_attr_init_np(&pwq_attr);
			(void)dispatch_assume_zero(r);
		}
#endif
		size_t i;
		for (i = 0; i < DISPATCH_ROOT_QUEUE_COUNT; i++) {
			pthread_workqueue_t pwq = NULL;
			dispatch_root_queue_context_t qc;
			qc = &_dispatch_root_queue_contexts[i];
#if DISPATCH_USE_LEGACY_WORKQUEUE_FALLBACK
			if (!disable_wq && qc->dgq_wq_priority != WORKQ_PRIO_INVALID) {
				r = pthread_workqueue_attr_setqueuepriority_np(&pwq_attr,
						qc->dgq_wq_priority);
				(void)dispatch_assume_zero(r);
				r = pthread_workqueue_attr_setovercommit_np(&pwq_attr,
						qc->dgq_wq_options &
						WORKQ_ADDTHREADS_OPTION_OVERCOMMIT);
				(void)dispatch_assume_zero(r);
				r = pthread_workqueue_create_np(&pwq, &pwq_attr);
				(void)dispatch_assume_zero(r);
				result = result || dispatch_assume(pwq);
			}
#endif // DISPATCH_USE_LEGACY_WORKQUEUE_FALLBACK
			if (pwq) {
				qc->dgq_kworkqueue = pwq;
			} else {
				qc->dgq_kworkqueue = (void*)(~0ul);
				// because the fastpath of _dispatch_global_queue_poke didn't
				// know yet that we're using the internal pool implementation
				// we have to undo its setting of dgq_pending
				qc->dgq_pending = 0;
			}
		}
#if DISPATCH_USE_LEGACY_WORKQUEUE_FALLBACK
		if (!disable_wq) {
			r = pthread_workqueue_attr_destroy_np(&pwq_attr);
			(void)dispatch_assume_zero(r);
		}
#endif
	}
#endif // DISPATCH_USE_LEGACY_WORKQUEUE_FALLBACK || DISPATCH_ENABLE_THREAD_POOL
#endif // DISPATCH_USE_WORKQUEUES
	return result;
}

Another pile of code. There is little public material on workqueues and I have not studied them in depth, but note the workqueue initialization call in the code above:

_pthread_workqueue_init_with_kevent(_dispatch_worker_thread3,
					(pthread_workqueue_function_kevent_t)
					_dispatch_kevent_worker_thread,
					offsetof(struct dispatch_queue_s, dq_serialnum), 0);

The workqueue designates _dispatch_worker_thread3 as the root queue's worker function:

static void
_dispatch_worker_thread3(pthread_priority_t pp)
{
	bool overcommit = pp & _PTHREAD_PRIORITY_OVERCOMMIT_FLAG;
	dispatch_queue_t dq;
	pp &= _PTHREAD_PRIORITY_OVERCOMMIT_FLAG | ~_PTHREAD_PRIORITY_FLAGS_MASK;
	_dispatch_thread_setspecific(dispatch_priority_key, (void *)(uintptr_t)pp);
	dq = _dispatch_get_root_queue(_dispatch_qos_from_pp(pp), overcommit); // pick the matching root queue from the root-queue array, based on the thread priority and whether it overcommits
	return _dispatch_worker_thread4(dq); // which ultimately calls this
}

static void
_dispatch_worker_thread4(void *context)
{
	dispatch_queue_t dq = context;
	dispatch_root_queue_context_t qc = dq->do_ctxt;

	_dispatch_introspection_thread_add();
	int pending = os_atomic_dec2o(qc, dgq_pending, relaxed);
	dispatch_assert(pending >= 0);
	_dispatch_root_queue_drain(dq, _dispatch_get_priority()); // drain the root queue of all its tasks and execute them
	_dispatch_voucher_debug("root queue clear", NULL);
	_dispatch_reset_voucher(NULL, DISPATCH_THREAD_PARK);
}

That is how GCD implements dispatch_async onto a concurrent queue. Honestly my understanding here is only approximate, but the broad strokes are:
the concurrent queue is first replaced by its corresponding root queue, and the custom dq is enqueued there. If this is the first enqueue, all the root queues get activated. 'Activation' mainly means creating the XNU-kernel-backed workqueue (a smarter thread pool that decides on its own whether new threads are needed) and installing _dispatch_worker_thread3 as its worker function. _dispatch_worker_thread3's main job is to call _dispatch_root_queue_drain, which empties the root queue: it takes every dq submitted to the current root queue and executes its tasks.

dispatch_once

We reach for GCD's dispatch_once whenever we write a singleton. So how is it implemented?

    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        NSLog(@"This will only run once");
    });

dispatch_once_t onceToken is a typedef, really just a long:

typedef long dispatch_once_t;

dispatch_once is a macro:

#define dispatch_once _dispatch_once

Looking at the definition of _dispatch_once:

void
_dispatch_once(dispatch_once_t *predicate,
		DISPATCH_NOESCAPE dispatch_block_t block)
{
	if (DISPATCH_EXPECT(*predicate, ~0l) != ~0l) {  // first entry takes this path; so does any other thread that calls _dispatch_once before *predicate has been set to ~0l
		dispatch_once(predicate, block); // until this call finishes, another thread calling _dispatch_once will still come in here
	}
	DISPATCH_COMPILER_CAN_ASSUME(*predicate == ~0l); // let the compiler assume *predicate == ~0l from here on
}

So once the singleton block has run, GCD sets onceToken to ~0, and any later call sees the used token and returns immediately. But there is still a window: before onceToken is set to ~0, another thread may enter _dispatch_once again, which means the inner dispatch_once can be entered multiple times.

Let's see how dispatch_once prevents that from running the block twice:

void
dispatch_once(dispatch_once_t *val, dispatch_block_t block)
{
	dispatch_once_f(val, block, _dispatch_Block_invoke(block));
}

void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
#if !DISPATCH_ONCE_INLINE_FASTPATH
	if (likely(os_atomic_load(val, acquire) == DLOCK_ONCE_DONE)) { // check whether the block already ran once (atomically load val; if it has run, val == ~0)
		return;
	}
#endif // !DISPATCH_ONCE_INLINE_FASTPATH
	return dispatch_once_f_slow(val, ctxt, func); 
}

static void
dispatch_once_f_slow(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
	_dispatch_once_waiter_t volatile *vval = (_dispatch_once_waiter_t*)val; // a plain long pointer force-cast to _dispatch_once_waiter_t *; no matter, it's all just an address
	struct _dispatch_once_waiter_s dow = { };
	_dispatch_once_waiter_t tail = &dow, next, tmp;
	dispatch_thread_event_t event;

	if (os_atomic_cmpxchg(vval, NULL, tail, acquire)) { // is *vval NULL? If so, return true and set *vval to tail; otherwise return false. (On first entry *vval == NULL; any thread arriving afterwards takes the else branch.) If no other thread enters, val keeps the value tail
		dow.dow_thread = _dispatch_tid_self();// the current thread's thread port
		_dispatch_client_callout(ctxt, func); // run the client code, i.e. our singleton initializer. Note: if the client code calls dispatch_once again with the same once token, it takes the else branch and blocks in _dispatch_thread_event_wait, so the _dispatch_thread_event_signal below never runs: deadlock

		next = (_dispatch_once_waiter_t)_dispatch_once_xchg_done(val);  // atomic_exchange_explicit(val, DLOCK_ONCE_DONE, memory_order_release): set val to DLOCK_ONCE_DONE and return its previous value into next
		while (next != tail) { // next != tail means other threads modified val, i.e. other threads tried to run the once block concurrently and are now blocked on their events, so signal them below
			tmp = (_dispatch_once_waiter_t)_dispatch_wait_until(next->dow_next);
			event = &next->dow_event;
			next = tmp;
			_dispatch_thread_event_signal(event);
		}
	} else { // threads that arrive later take this path (they block until the first thread finishes, then get woken)
		_dispatch_thread_event_init(&dow.dow_event);
		next = *vval; // keep the previous value
		for (;;) {
			if (next == DISPATCH_ONCE_DONE) {
				break;
			}
			if (os_atomic_cmpxchgv(vval, next, tail, &next, release)) { // is *vval equal to next? If so, return true and set *vval = tail; if not, return false and store the current value into next. On each thread's first pass here they should be equal
			    // here dow = *tail = **vval
			    // so the next two lines can be read as:
			    // (*vval)->dow_thread = next->dow_thread
			    // (*vval)->dow_next = next;
				dow.dow_thread = next->dow_thread;
				dow.dow_next = next;
				if (dow.dow_thread) {
					pthread_priority_t pp = _dispatch_get_priority();
					_dispatch_thread_override_start(dow.dow_thread, pp, val);
				}
				_dispatch_thread_event_wait(&dow.dow_event); // the thread sleeps here until the once block finishes and wakes it
				if (dow.dow_thread) {
					_dispatch_thread_override_end(dow.dow_thread, val);
				}
				break;
			}
		}
		_dispatch_thread_event_destroy(&dow.dow_event);
	}
}

The comments above should explain how dispatch_once_f_slow guarantees that the code runs exactly once, even while other threads enter concurrently. Its main multithreading tool is atomic operations (the C11 <stdatomic.h> kind). The rough idea of dispatch_once_f_slow (a simplified model follows the list):

  1. The first thread to enter dispatch_once_f_slow sees *vval == NULL, takes the if branch, and runs the once block. Through the atomic os_atomic_cmpxchg, *vval is set to tail, a _dispatch_once_waiter_t pointing at an empty _dispatch_once_waiter_s struct.
  2. Any thread entering dispatch_once_f_slow afterwards sees *vval != NULL and takes the else branch, whose main job is to push the current thread's _dispatch_once_waiter_t onto the head of the vval list and then block the thread with _dispatch_thread_event_wait(&dow.dow_event). In other words, until the once block finishes, every other thread that wants the singleton is blocked.
  3. Back in the first thread's if branch: after the once block runs (_dispatch_client_callout(ctxt, func)), onceToken's val is set to DLOCK_ONCE_DONE, marking the block as executed, and val's previous value is handed to next. If other threads came in meanwhile, then per step 2 val no longer equals the initial tail, and the val list holds a _dispatch_once_waiter_t for each of those threads; the list is walked, waking each thread in turn, until next equals the initial tail.
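
To make the shape of this algorithm concrete, here is a heavily simplified model of the same idea written with C11 atomics. It keeps only the skeleton: the first thread wins a CAS and runs the block, the done flag is published last with release ordering, and later threads wait. It spins where the real implementation parks threads on thread events and keeps a waiter list, so this is a sketch, not libdispatch:

#include <stdatomic.h>
#include <stdbool.h>

typedef _Atomic long demo_once_t;
#define DEMO_ONCE_PENDING 0L     // nobody has entered yet
#define DEMO_ONCE_RUNNING 1L     // one thread is running the block
#define DEMO_ONCE_DONE    (~0L)  // the block has run; like DLOCK_ONCE_DONE

static void demo_once(demo_once_t *pred, void (*fn)(void))
{
	long expected = DEMO_ONCE_PENDING;
	// The first thread wins the CAS, like the if branch above.
	if (atomic_compare_exchange_strong_explicit(pred, &expected,
			DEMO_ONCE_RUNNING, memory_order_acquire, memory_order_acquire)) {
		fn();
		// Publish DONE last (release) so waiters observe fn's side effects.
		atomic_store_explicit(pred, DEMO_ONCE_DONE, memory_order_release);
		return;
	}
	// Later threads wait for DONE; libdispatch instead parks them on a
	// thread event and wakes them with _dispatch_thread_event_signal.
	while (atomic_load_explicit(pred, memory_order_acquire) != DEMO_ONCE_DONE) {
		/* spin */
	}
}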

The dispatch_once deadlock problem

Having read the source above, we see that dispatch_once blocks the threads that arrive later. That exists to prevent concurrent execution, but it also plants a deadlock. If, while a dispatch_once block is still executing, the same thread calls back into that dispatch_once, it deadlocks. This is really a recursive-call cycle, but because the later entry blocks, it does not recurse; it deadlocks instead:

- (void)viewDidLoad {
    [super viewDidLoad];
    
    [self once];
}

- (void)once {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        [self otherOnce];
    });
    NSLog(@"遇到第一隻熊貓寶寶...");
}

- (void)otherOnce {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        [self once];
    });
    NSLog(@"遇到第二隻熊貓寶寶...");
}

In the latest Xcode, however, this crashes, which means GCD's internal implementation has changed again.
(Screenshot: crash report for the recursive dispatch_once call)

dispatch_group

When we need to wait for a batch of tasks to finish, we can use dispatch_group:

    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t myQueue = dispatch_queue_create("com.example.MyQueue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t finishQueue = dispatch_queue_create("com.example.finishQueue", NULL);

    dispatch_group_async(group, myQueue, ^{NSLog(@"Task 1");});
    dispatch_group_async(group, myQueue, ^{NSLog(@"Task 2");});
    dispatch_group_async(group, myQueue, ^{NSLog(@"Task 3");});

    dispatch_group_notify(group, finishQueue, ^{
        NSLog(@"All Done!");
    });

Let's walk through the source of each function called, in order:

dispatch_group_create

dispatch_group_t group = dispatch_group_create();
dispatch_group_t
dispatch_group_create(void)
{
	return _dispatch_group_create_with_count(0);
}

static inline dispatch_group_t
_dispatch_group_create_with_count(long count)
{
	// create a dg
	dispatch_group_t dg = (dispatch_group_t)_dispatch_object_alloc(
			DISPATCH_VTABLE(group), sizeof(struct dispatch_group_s));
	_dispatch_semaphore_class_init(count, dg); // initialize dg; count defaults to 0
	if (count) {
		os_atomic_store2o(dg, do_ref_cnt, 1, relaxed); // bump dg's reference count
	}
	return dg;
}

Creating a dg produces and returns a dispatch_group_t instance, the group type we use. Here is its definition:

struct dispatch_group_s {
	DISPATCH_SEMAPHORE_HEADER(group, dg);
	int volatile dg_waiters;
	struct dispatch_continuation_s *volatile dg_notify_head;
	struct dispatch_continuation_s *volatile dg_notify_tail;
};

DISPATCH_SEMAPHORE_HEADER is a macro, which shows that a dispatch_group_s can also be viewed as a semaphore object.

#define DISPATCH_SEMAPHORE_HEADER(cls, ns) \
	DISPATCH_OBJECT_HEADER(cls); \
	long volatile ns##_value; \
	_dispatch_sema4_t ns##_sema

Expanding dispatch_group_s:

struct dispatch_group_s {
	DISPATCH_OBJECT_HEADER(group);
	long volatile dg_value; // records how many group tasks are outstanding
	_dispatch_sema4_t dg_sema;
	int volatile dg_waiters;
	struct dispatch_continuation_s *volatile dg_notify_head;
	struct dispatch_continuation_s *volatile dg_notify_tail;
};

dispatch_group_async

With a group created, we can submit tasks to it:

dispatch_group_async(group, myQueue, ^{NSLog(@"Task 1");});

void
dispatch_group_async(dispatch_group_t dg, dispatch_queue_t dq,
		dispatch_block_t db)
{
	// wrap db into a dispatch_continuation_t, just like dispatch_async
	dispatch_continuation_t dc = _dispatch_continuation_alloc();
	uintptr_t dc_flags = DISPATCH_OBJ_CONSUME_BIT | DISPATCH_OBJ_GROUP_BIT; // unlike a plain async, dc_flags also sets DISPATCH_OBJ_GROUP_BIT
	_dispatch_continuation_init(dc, dq, db, 0, 0, dc_flags);
	// which ends up calling this
	_dispatch_continuation_group_async(dg, dq, dc);
}
static inline void
_dispatch_continuation_group_async(dispatch_group_t dg, dispatch_queue_t dq,
		dispatch_continuation_t dc)
{
	dispatch_group_enter(dg); // this does dg->dg_value + 1
	dc->dc_data = dg; // store dg into dc
	_dispatch_continuation_async(dq, dc); // the same call dispatch_async makes
}

As you can see, dispatch_group_async and dispatch_async both end up in _dispatch_continuation_async. From there the logic matches dispatch_async, until the dc is invoked:

static inline void
_dispatch_continuation_invoke_inline(dispatch_object_t dou, voucher_t ov,
		dispatch_invoke_flags_t flags)
{
	dispatch_continuation_t dc = dou._dc, dc1;
	dispatch_invoke_with_autoreleasepool(flags, {
		uintptr_t dc_flags = dc->dc_flags;
		// Add the item back to the cache before calling the function. This
		// allows the 'hot' continuation to be used for a quick callback.
		//
		// The ccache version is per-thread.
		// Therefore, the object has not been reused yet.
		// This generates better assembly.
		_dispatch_continuation_voucher_adopt(dc, ov, dc_flags);
		if (dc_flags & DISPATCH_OBJ_CONSUME_BIT) {
			dc1 = _dispatch_continuation_free_cacheonly(dc);
		} else {
			dc1 = NULL;
		}
		if (unlikely(dc_flags & DISPATCH_OBJ_GROUP_BIT)) { // groups take this path
			_dispatch_continuation_with_group_invoke(dc);
		} else {
			_dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
			_dispatch_introspection_queue_item_complete(dou);
		}
		if (unlikely(dc1)) {
			_dispatch_continuation_free_to_cache_limit(dc1);
		}
	});
	_dispatch_perfmon_workitem_inc();
}
static inline void
_dispatch_continuation_with_group_invoke(dispatch_continuation_t dc)
{
	struct dispatch_object_s *dou = dc->dc_data; // dou here is of type dispatch_group_s
	unsigned long type = dx_type(dou);
	if (type == DISPATCH_GROUP_TYPE) {
		_dispatch_client_callout(dc->dc_ctxt, dc->dc_func);  // invoke the client callback
		_dispatch_introspection_queue_item_complete(dou);
		dispatch_group_leave((dispatch_group_t)dou); // the group task has run: leave the group
	} else {
		DISPATCH_INTERNAL_CRASH(dx_type(dou), "Unexpected object type");
	}
}
void
dispatch_group_leave(dispatch_group_t dg)
{
	long value = os_atomic_dec2o(dg, dg_value, release); // dg->dg_value - 1, matching dispatch_group_enter's + 1; the new value is returned into value
	if (slowpath(value == 0)) {    // all of the group's tasks are done: call _dispatch_group_wake
		return (void)_dispatch_group_wake(dg, true);
	}
	if (slowpath(value < 0)) { // this shows that dispatch_group_enter / dispatch_group_leave must be called in matched pairs, or we crash
		DISPATCH_CLIENT_CRASH(value,
				"Unbalanced call to dispatch_group_leave()");
	}
}
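
The Unbalanced-call crash above is exactly what you get when dispatch_group_leave runs more often than dispatch_group_enter. The same enter/leave pair is public API, and it is the way to track work that is itself asynchronous, where dispatch_group_async cannot see the real completion point. A sketch (fetchDataWithCompletion: stands in for any callback-based API and is not a real method):

    dispatch_group_t group = dispatch_group_create();

    dispatch_group_enter(group);          // dg_value + 1, exactly what dispatch_group_async does internally
    [client fetchDataWithCompletion:^(NSData *data) {
        // ... consume data ...
        dispatch_group_leave(group);      // dg_value - 1; the last leave wakes the group
    }];

    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"async work finished");
    });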

So when all the tasks in the group have finished, _dispatch_group_wake is called:

static long
_dispatch_group_wake(dispatch_group_t dg, bool needs_release)
{
	dispatch_continuation_t next, head, tail = NULL;
	long rval;

	// cannot use os_mpsc_capture_snapshot() because we can have concurrent
	// _dispatch_group_wake() calls
	head = os_atomic_xchg2o(dg, dg_notify_head, NULL, relaxed);
	if (head) {
		// snapshot before anything is notified/woken <rdar://problem/8554546>
		tail = os_atomic_xchg2o(dg, dg_notify_tail, NULL, release);
	}
	rval = (long)os_atomic_xchg2o(dg, dg_waiters, 0, relaxed);
	if (rval) { // if there are waiters on the group, wake their threads
		// wake group waiters
		_dispatch_sema4_create(&dg->dg_sema, _DSEMA4_POLICY_FIFO);
		_dispatch_sema4_signal(&dg->dg_sema, rval);
	}
	uint16_t refs = needs_release ? 1 : 0; // <rdar://problem/22318411>
	if (head) {
		// async group notify blocks
		do { // run, one by one, the blocks registered for when the group finishes
			next = os_mpsc_pop_snapshot_head(head, tail, do_next);
			dispatch_queue_t dsn_queue = (dispatch_queue_t)head->dc_data;
			_dispatch_continuation_async(dsn_queue, head);
			_dispatch_release(dsn_queue);
		} while ((head = next));
		refs++;
	}
	if (refs) _dispatch_release_n(dg, refs); // release the group
	return 0;
}

In the code above, once all of the group's tasks have executed, _dispatch_group_wake runs, which in turn calls _dispatch_continuation_async(dsn_queue, head) to re-enqueue and invoke the group-finish blocks.

There are two ways to wait for a group to finish:
the asynchronous dispatch_group_notify and the synchronous dispatch_group_wait.

Let's look at each implementation:

void
dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq,
		dispatch_block_t db)
{
	// wrap db
	dispatch_continuation_t dsn = _dispatch_continuation_alloc();
	_dispatch_continuation_init(dsn, dq, db, 0, 0, DISPATCH_OBJ_CONSUME_BIT);
	// internally this calls the private function _dispatch_group_notify
	_dispatch_group_notify(dg, dq, dsn);
}

static inline void
_dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq,
		dispatch_continuation_t dsn)
{
	dsn->dc_data = dq; // stash the notify queue in dsn's dc_data, so that when all the group's tasks finish, the finish block is invoked on the requested queue
	dsn->do_next = NULL;
	_dispatch_retain(dq);
	if (os_mpsc_push_update_tail(dg, dg_notify, dsn, do_next)) { // append this notify to the dg_notify_tail list, to be called back when the group finishes
		_dispatch_retain(dg);
		os_atomic_store2o(dg, dg_notify_head, dsn, ordered);
		// seq_cst with atomic store to notify_head <rdar://problem/11750916>
		if (os_atomic_load2o(dg, dg_value, ordered) == 0) {
			_dispatch_group_wake(dg, false);
		}
	}
}

Simple enough: since notify is asynchronous there is nothing to wait for; the finish block is just enqueued on the dispatch group, to be invoked in turn once all the group's tasks complete.

Now the synchronous implementation:

long
dispatch_group_wait(dispatch_group_t dg, dispatch_time_t timeout)
{
	if (dg->dg_value == 0) { // the group has no tasks at all: return immediately
		return 0;
	}
	if (timeout == 0) { // timeout == 0: return immediately
		return _DSEMA4_TIMEOUT();
	}
	return _dispatch_group_wait_slow(dg, timeout);
}

static long
_dispatch_group_wait_slow(dispatch_group_t dg, dispatch_time_t timeout)
{
	long value;
	int orig_waiters;

	// check before we cause another signal to be sent by incrementing
	// dg->dg_waiters
	// before waiting, check whether any tasks remain; if none, wake the dg directly
	value = os_atomic_load2o(dg, dg_value, ordered); // 19296565
	if (value == 0) {
		return _dispatch_group_wake(dg, false);
	}

	// add one waiter to the group's dg_waiters count
	(void)os_atomic_inc2o(dg, dg_waiters, relaxed);
	// check the values again in case we need to wake any threads
	value = os_atomic_load2o(dg, dg_value, ordered); // 19296565
	if (value == 0) {
		_dispatch_group_wake(dg, false);
		// Fall through to consume the extra signal, forcing timeout to avoid
		// useless setups as it won't block
		timeout = DISPATCH_TIME_FOREVER;
	}
	// create the semaphore, ready to wait
	_dispatch_sema4_create(&dg->dg_sema, _DSEMA4_POLICY_FIFO);
	// different waiting strategies depending on the timeout value
	switch (timeout) {
	default:
		if (!_dispatch_sema4_timedwait(&dg->dg_sema, timeout)) { // by default, wait until the timeout
			break;
		}
		// Fall through and try to undo the earlier change to
		// dg->dg_waiters
	case DISPATCH_TIME_NOW: // timeout == 0, i.e. don't wait
		orig_waiters = dg->dg_waiters;
		while (orig_waiters) { // decrement the waiter count and report a timeout
			if (os_atomic_cmpxchgvw2o(dg, dg_waiters, orig_waiters,
					orig_waiters - 1, &orig_waiters, relaxed)) {
				return _DSEMA4_TIMEOUT();
			}
		}
		// Another thread is running _dispatch_group_wake()
		// Fall through and drain the wakeup.
	case DISPATCH_TIME_FOREVER: // wait indefinitely
		_dispatch_sema4_wait(&dg->dg_sema);
		break;
	}
	return 0;
}

dispatch_group_wait's implementation is also simple: it uses a semaphore underneath and bumps the group's dg_waiters count.
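
For completeness, a small usage sketch of this synchronous path; the return value is nonzero when the wait timed out:

    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t q = dispatch_queue_create("wait.demo", DISPATCH_QUEUE_CONCURRENT);

    dispatch_group_async(group, q, ^{ sleep(1); NSLog(@"slow task"); });

    // Block the current thread for at most 2 seconds.
    dispatch_time_t deadline = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC));
    if (dispatch_group_wait(group, deadline) == 0) {
        NSLog(@"all group tasks finished in time");
    } else {
        NSLog(@"timed out waiting for the group");
    }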

