Golang Code Notes -- the sync Package

sync

The sync package implements some basic synchronization primitives; for higher-level synchronization, the official recommendation is to use channels.

It also includes the atomic subpackage (sync/atomic), which implements atomic operations on data.

Note that these primitive objects must never be copied when passed as parameters: "XXX must not be copied after first use."

Mutex

Locking is the core concept of the sync package, and most of the other primitives are implemented as wrappers around Mutex; Go abstracts the Locker interface on top of Mutex:

// A Locker represents an object that can be locked and unlocked.
type Locker interface {
	Lock()
	Unlock()
}

Mutex is a basic implementation of Locker.

Go's Mutex is implemented on top of a spin lock, with optimization strategies added to address excessive waiting and starvation.

A one-line sketch of a spin lock (borrowed from another write-up):

for !atomic.CompareAndSwapInt32(&locked, 0, 1) { /* spin */ }

As the sketch shows, a spin lock relies on atomic operations for exclusivity: it keeps retrying CAS, and a successful CAS means the lock has been acquired; the value that CAS targets toggles between 0 and 1.
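
Fleshed out into runnable form, a toy spin lock might look like the sketch below (for illustration only; the SpinLock type is my own invention, and real code should prefer sync.Mutex, since a pure spin lock wastes CPU):

import (
	"runtime"
	"sync/atomic"
)

// SpinLock is a toy spin lock: 0 = unlocked, 1 = locked.
type SpinLock struct {
	state int32
}

func (l *SpinLock) Lock() {
	// Keep retrying CAS(0 -> 1) until it succeeds.
	for !atomic.CompareAndSwapInt32(&l.state, 0, 1) {
		runtime.Gosched() // yield so the holder gets a chance to release
	}
}

func (l *SpinLock) Unlock() {
	// Flip the state back to 0 so another CAS can succeed.
	atomic.StoreInt32(&l.state, 0)
}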

The Mutex definition:

// A Mutex is a mutual exclusion lock.
// The zero value for a Mutex is an unlocked mutex.
//
// A Mutex must not be copied after first use.
type Mutex struct {
  // state packs the mutex state: woken flag / starving flag / locked flag,
  // plus the waiter count in the high bits
	state int32
  // semaphore used to block and wake waiting goroutines
	sema  uint32
}

Related concepts

Locked state (mutexLocked): the lock is currently held;

Woken state (mutexWoken): set when an Unlock wakes some goroutine; under concurrency it prevents several goroutines from being woken at once;

Normal mode: goroutines in the wait queue compete for the lock in FIFO order; in this mode, a newly arriving goroutine still tries to acquire the lock even when the wait queue is non-empty (spinning up to 4 times while it waits);

Starvation mode (mutexStarving):

Triggered when some goroutine has been waiting for the lock for more than 1ms;

Newly arriving goroutines neither spin nor try to acquire the lock;

Unlock hands the lock directly to the first goroutine in the wait queue;

This mode exists to guarantee fairness, so that even a goroutine at the tail of the queue eventually gets the lock;

The walkthrough below omits some of the concurrency-check logic.

Note that it is not only the locked flag whose updates need CAS: every update to the Mutex state must go through CAS. If a CAS fails, the state may have been updated by another goroutine in the meantime, so the code re-reads the latest state via old = m.state and retries.
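
The retry pattern looks roughly like this (a minimal sketch; setFlag and its parameters are hypothetical stand-ins, not names from the source):

// setFlag atomically ORs flag into *state, retrying until its CAS wins.
func setFlag(state *int32, flag int32) {
	for {
		old := atomic.LoadInt32(state)
		if atomic.CompareAndSwapInt32(state, old, old|flag) {
			return // no one touched state between the load and the CAS
		}
		// CAS failed: another goroutine updated state; loop and re-read it.
	}
}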

Lock

You can read this alongside the source: src/sync/mutex.go. The version below is annotated and slightly simplified:

func (m *Mutex) Lock() {
	// Fast path: a single CAS attempt; on success return immediately.
	if atomic.CompareAndSwapInt32(&m.state, 0, mutexLocked) {
		return
	}

	// Slow path: initialize the blocking-strategy variables.
	var waitStartTime int64 // when this goroutine started waiting for the lock
	starving := false       // whether to switch to starvation mode
	awoke := false          // whether this goroutine holds the woken flag
	iter := 0               // number of spin iterations so far
	old := m.state          // last observed state
	for {
		// Spin only while the mutex is locked, not starving, and the
		// spin limit (4 iterations) has not been reached.
		if old&(mutexLocked|mutexStarving) == mutexLocked && runtime_canSpin(iter) {
			// Try to claim the woken flag so Unlock does not wake other
			// blocked goroutines while we spin.
			if !awoke && old&mutexWoken == 0 && old>>mutexWaiterShift != 0 &&
				atomic.CompareAndSwapInt32(&m.state, old, old|mutexWoken) {
				awoke = true
			}
			runtime_doSpin() // spin for a short while
			iter++
			old = m.state // take the latest state
			continue
		}
		// new is the next state we will try to install.
		new := old
		if old&mutexStarving == 0 {
			new |= mutexLocked // not starving: try to grab the lock
		}
		if old&(mutexLocked|mutexStarving) != 0 {
			new += 1 << mutexWaiterShift // we will join the wait queue
		}
		if starving && old&mutexLocked != 0 {
			// the next state enters starvation mode
			new |= mutexStarving
		}
		if awoke {
			// reset the woken flag we claimed earlier
			new &^= mutexWoken
		}
		// state was not updated by another goroutine: install new.
		if atomic.CompareAndSwapInt32(&m.state, old, new) {
			// old tells us whether we actually acquired the lock.
			if old&(mutexLocked|mutexStarving) == 0 {
				break // locked the mutex with CAS
			}
			// Join the wait queue; a goroutine that has waited before is
			// queued at the head (LIFO) to preserve fairness.
			queueLifo := waitStartTime != 0
			if waitStartTime == 0 {
				waitStartTime = runtime_nanotime()
			}
			runtime_SemacquireMutex(&m.sema, queueLifo)
			// Switch to starvation mode once the wait exceeds
			// starvationThresholdNs (1ms).
			starving = starving || runtime_nanotime()-waitStartTime > starvationThresholdNs
			// Take the latest state after being woken.
			old = m.state
			if old&mutexStarving != 0 {
				// Starvation mode: the lock was handed to us directly;
				// account for taking the lock and leaving the queue.
				delta := int32(mutexLocked - 1<<mutexWaiterShift)
				if !starving || old>>mutexWaiterShift == 1 {
					// Exit starvation mode.
					// Critical to do it here and consider wait time.
					// Starvation mode is so inefficient, that two goroutines
					// can go lock-step infinitely once they switch mutex
					// to starvation mode.
					delta -= mutexStarving
				}
				atomic.AddInt32(&m.state, delta)
				break
			}
			awoke = true
			iter = 0
		} else {
			old = m.state // CAS failed: another goroutine updated state; retry
		}
	}
}

Unlock

func (m *Mutex) Unlock() {
	// Drop the locked flag.
	new := atomic.AddInt32(&m.state, -mutexLocked)
	// Detect a repeated Unlock of an already-unlocked mutex.
	if (new+mutexLocked)&mutexLocked == 0 {
		throw("sync: unlock of unlocked mutex")
	}
	if new&mutexStarving == 0 {
		// Normal mode: if there are waiters and no goroutine has been
		// woken or grabbed the lock yet, wake an arbitrary waiter
		// (the CAS retry loop around this check is elided).
		runtime_Semrelease(&m.sema, false)
	} else {
		// Starvation mode: hand the lock directly to the first
		// goroutine in the wait queue.
		runtime_Semrelease(&m.sema, true)
	}
}

To summarize, the key points of the Mutex design:

1. A goroutine blocking for the first time is put at the tail of the wait queue; a goroutine that was woken but failed to re-acquire the lock is requeued at the head (newly arriving goroutines are already running on a CPU, which gives them a big advantage in grabbing the lock, so requeueing old waiters at the head compensates for that);

2. Once any goroutine has been trying to acquire the lock for more than 1ms, the mutex enters starvation mode; in starvation mode, newly arriving goroutines neither spin nor try to acquire the lock, and Unlock wakes the goroutine at the head of the queue and hands it the lock directly;

RWMutex

A read-write lock built on Mutex; if you only use Lock/Unlock, it behaves the same as a plain Mutex.

The implementation mainly relies on a Mutex plus a few counters, which allow the lock to be held by any number of readers or by a single writer.

Lock: must not only acquire the embedded w.Lock(), but also wait until readerCount drops to 0;

RLock: only needs to check whether a writer is waiting (signaled by readerCount holding a special negative value);

This point matters: as soon as one writer is blocked, newly arriving readers are also blocked, until the writer's Unlock; see the annotated excerpt after the struct definition below.

// If a goroutine holds a RWMutex for reading and another goroutine might
// call Lock, no goroutine should expect to be able to acquire a read lock
// until the initial read lock is released. In particular, this prohibits
// recursive read locking. This is to ensure that the lock eventually becomes
// available; a blocked Lock call excludes new readers from acquiring the
// lock.
type RWMutex struct {
	w           Mutex  // held if there are pending writers
	writerSem   uint32 // semaphore for writers to wait for completing readers
	readerSem   uint32 // semaphore for readers to wait for completing writers
	readerCount int32  // number of pending readers
	readerWait  int32  // number of departing readers
}
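
The readerCount trick is easiest to see in the two acquire paths, roughly as in src/sync/rwmutex.go (lightly annotated; the exact runtime call signatures vary a bit across Go versions):

func (rw *RWMutex) RLock() {
	if atomic.AddInt32(&rw.readerCount, 1) < 0 {
		// readerCount is negative: a writer is pending, so block.
		runtime_SemacquireMutex(&rw.readerSem, false)
	}
}

func (rw *RWMutex) Lock() {
	// Resolve competition with other writers first.
	rw.w.Lock()
	// Announce the pending writer by driving readerCount negative;
	// r is the number of readers still holding the lock.
	r := atomic.AddInt32(&rw.readerCount, -rwmutexMaxReaders) + rwmutexMaxReaders
	// Wait for those active readers to depart.
	if r != 0 && atomic.AddInt32(&rw.readerWait, r) != 0 {
		runtime_SemacquireMutex(&rw.writerSem, false)
	}
}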

// Example
func Case9() {
	var rm sync.RWMutex
	rm.RLock()
	println("read locked 1.")
	go func() {
		rm.Lock()
		println("locked.")
	}()
	go func() {
		time.Sleep(time.Second)
		println("2 read locked.")
		rm.RLock()
		println("read locked 2.")
	}()
	time.Sleep(time.Second * 10)
}
/*
read locked 1.
2 read locked.
(neither "locked." nor "read locked 2." is printed: the writer is blocked
by the outstanding read lock, and the pending writer in turn blocks the
second reader)
*/

Once

Built on a Mutex plus a done flag: after the first call to Do(), done is set to 1 and f is never executed again (even if a later call passes a different function);

type Once struct {
	m    Mutex
	done uint32
}

// Example
func Case1() {
	var once sync.Once
	f := func() {
		println("1")
	}
	once.Do(f) // prints "1"
	once.Do(f) // no output: f is never executed again
}

WaitGroup

For scenarios that run multiple tasks concurrently, WaitGroup can be used to wait for all of them to finish.

// A WaitGroup waits for a collection of goroutines to finish.
// The main goroutine calls Add to set the number of
// goroutines to wait for. Then each of the goroutines
// runs and calls Done when finished. At the same time,
// Wait can be used to block until all goroutines have finished.
//
// A WaitGroup must not be copied after first use.
type WaitGroup struct {
	noCopy noCopy

	// 64-bit value: high 32 bits are counter, low 32 bits are waiter count.
	// 64-bit atomic operations require 64-bit alignment, but 32-bit
	// compilers do not ensure it. So we allocate 12 bytes and then use
	// the aligned 8 bytes in them as state, and the other 4 as storage
	// for the sema.
	state1 [3]uint32
}

// Example
func Case7() {
	var wg sync.WaitGroup
	wg.Add(2)
	go func() {
		time.Sleep(time.Second)
		println("done 1")
		wg.Done() // wg.Add(-1)
	}()
	go func() {
		println("done 2")
		wg.Done()
	}()
	wg.Wait()
	println("all done.")
}

Cond

Cond implements broadcast- and unicast-style synchronization scenarios.

Broadcast: several goroutines block waiting; a broadcast wakes all of them.

Unicast: several goroutines block waiting; a signal wakes exactly one of them (generally in FIFO order).

// Cond implements a condition variable, a rendezvous point
// for goroutines waiting for or announcing the occurrence
// of an event.
//
// Each Cond has an associated Locker L (often a *Mutex or *RWMutex),
// which must be held when changing the condition and
// when calling the Wait method.
//
// A Cond must not be copied after first use.
type Cond struct {
	noCopy noCopy

	// L is held while observing or changing the condition
	L Locker

	notify  notifyList
	checker copyChecker
}

// Example
func Case2() {
	cond := sync.NewCond(&sync.Mutex{})
	go func() {
		for {
      // the lock must be held before calling Wait
			cond.L.Lock()
			cond.Wait()
			println(1)
			cond.L.Unlock()
		}
	}()
	go func() {
		for {
			cond.L.Lock()
			cond.Wait()
			println(2)
			cond.L.Unlock()
		}
	}()
	go func() {
		for {
      // Signal/Broadcast may be called without holding the lock
			cond.Signal()
			// cond.Broadcast()
			time.Sleep(time.Second)
		}
	}()
	select {}
}

Pool

Pool implements an object pool for sharing temporary objects, to avoid the GC and memory pressure caused by frequently creating small objects.

Combined with bytes.Buffer, it can serve as a shared buffer pool for typical high-concurrency scenarios.

// An appropriate use of a Pool is to manage a group of temporary items
// silently shared among and potentially reused by concurrent independent
// clients of a package. Pool provides a way to amortize allocation overhead
// across many clients.
//
// An example of good use of a Pool is in the fmt package, which maintains a
// dynamically-sized store of temporary output buffers. The store scales under
// load (when many goroutines are actively printing) and shrinks when
// quiescent.
//
// On the other hand, a free list maintained as part of a short-lived object is
// not a suitable use for a Pool, since the overhead does not amortize well in
// that scenario. It is more efficient to have such objects implement their own
// free list.
//
// A Pool must not be copied after first use.
type Pool struct {
	noCopy noCopy

	local     unsafe.Pointer // local fixed-size per-P pool, actual type is [P]poolLocal
	localSize uintptr        // size of the local array

	// New optionally specifies a function to generate
	// a value when Get would otherwise return nil.
	// It may not be changed concurrently with calls to Get.
	New func() interface{}
}

// Example
func Case8() {
	pool := sync.Pool{
		New: func() interface{} {
			// Store pointers so buffers are not copied in and out of the pool.
			return &bytes.Buffer{}
		},
	}
	buf := pool.Get().(*bytes.Buffer)
	buf.WriteString("abc")
	println(buf.String()) // abc
	buf.Reset()           // always reset before returning a buffer to the pool
	pool.Put(buf)
}

Map

A concurrency-safe map.

Trades space for time: two redundant data structures (read and dirty) reduce the performance impact of locking.

Adjusts dynamically: once the miss count grows large enough, the dirty map is promoted to read.

Reads, updates, and deletes go through read first, because accessing read requires no lock; a usage sketch follows below.
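
A minimal usage sketch (Case10 is my own label continuing the numbering above, not a name from the original):

// Example
func Case10() {
	var m sync.Map
	// Store, Load, Range, and Delete are all safe for concurrent use.
	m.Store("a", 1)
	if v, ok := m.Load("a"); ok {
		fmt.Println(v) // 1
	}
	m.Range(func(k, v interface{}) bool {
		fmt.Println(k, v)
		return true // return false to stop the iteration
	})
	m.Delete("a")
}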

TODO

sync/atomic

Atomic operations; the concrete implementation lives mainly in assembly.

It relies on the LOCK instruction prefix, which enforces atomicity by locking the bus or, on modern CPUs, by cache locking via the MESI coherence protocol.

The package provides Load, Store, Add, CAS (CompareAndSwap), and Swap operations.
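
Typical usage from Go code looks like the sketch below (counter and flag are illustrative names):

import "sync/atomic"

func atomicDemo() {
	var counter int64
	atomic.AddInt64(&counter, 1)   // atomic increment
	_ = atomic.LoadInt64(&counter) // atomic read
	atomic.StoreInt64(&counter, 0) // atomic write

	var flag int32
	if atomic.CompareAndSwapInt32(&flag, 0, 1) {
		// flag was flipped from 0 to 1 atomically
	}
	_ = atomic.SwapInt32(&flag, 0) // swap in a new value, returning the old one
}

The corresponding runtime assembly (386) for a few of these operations: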

// atomic.AddXXX()
TEXT runtime∕internal∕atomic·Xadd(SB), NOSPLIT, $0-12
	MOVL	ptr+0(FP), BX
	MOVL	delta+4(FP), AX
	MOVL	AX, CX
	LOCK
	XADDL	AX, 0(BX)
	ADDL	CX, AX
	MOVL	AX, ret+8(FP)
	RET

// atomic.StoreXXX()
TEXT runtime∕internal∕atomic·Store(SB), NOSPLIT, $0-8
	MOVL	ptr+0(FP), BX
	MOVL	val+4(FP), AX
	XCHGL	AX, 0(BX)
	RET

// bool Cas(int32 *val, int32 old, int32 new)
// Atomically:
//	if(*val == old){
//		*val = new;
//		return 1;
//	}else
//		return 0;
TEXT runtime∕internal∕atomic·Cas(SB), NOSPLIT, $0-13
	MOVL	ptr+0(FP), BX
	MOVL	old+4(FP), AX
	MOVL	new+8(FP), CX
	LOCK
	CMPXCHGL	CX, 0(BX)
	SETEQ	ret+12(FP)
	RET