Some clever tricks in VictoriaMetrics

Getting the current time quickly

VictoriaMetrics has a fasttime library for quickly obtaining the current Unix time. The implementation is quite simple: a background goroutine refreshes the variable currentTimestamp, which holds the current time, once per second, and callers simply load that variable atomically. It is roughly 8 times as fast as time.Now().

The core technique is to push the main work into the background and hand the result over through an intermediate variable, improving performance via asynchrony, provided the application can tolerate some loss of precision (up to one second here).

func init() {
	go func() {
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for tm := range ticker.C { 
			t := uint64(tm.Unix())
			atomic.StoreUint64(&currentTimestamp, t)
		}
	}()
}

var currentTimestamp = uint64(time.Now().Unix())

// UnixTimestamp returns the current unix timestamp in seconds.
//
// It is faster than time.Now().Unix()
func UnixTimestamp() uint64 {
	return atomic.LoadUint64(&currentTimestamp)
}
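
The speedup claim is easy to check with a micro-benchmark. A minimal sketch, assuming the code above lives in a package named fasttime (benchmark names are illustrative):

func BenchmarkUnixTimestamp(b *testing.B) {
	var sink uint64
	for i := 0; i < b.N; i++ {
		sink += fasttime.UnixTimestamp() // just an atomic load
	}
	_ = sink
}

func BenchmarkTimeNowUnix(b *testing.B) {
	var sink int64
	for i := 0; i < b.N; i++ {
		sink += time.Now().Unix() // a full clock read on every call
	}
	_ = sink
}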

Computing the hash of a struct

The hashUint64 function uses xxhash.Sum64 to compute the hash of the Key struct. unsafe.Pointer converts the *Key pointer into a pointer to a byte array whose length is unsafe.Sizeof(*k); unsafe.Sizeof() returns the size of the struct in bytes.

If a value has a fixed size, for example h of type uint64, the array length can be given directly as 8: bp := (*[8]byte)(unsafe.Pointer(&h)).
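
A minimal sketch of this fixed-size conversion (variable names are illustrative):

var h uint64 = 0x0102030405060708
bp := (*[8]byte)(unsafe.Pointer(&h)) // an 8-byte array view over h's memory
fmt.Println(bp[:])                   // byte order follows the platform's endianness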

Note that unsafe.Sizeof() returns the size of the data structure itself, not the size of the data it references. In the example below it returns 24 for a slice, which is the size of the slice header (SliceHeader), not the size of the referenced data (use len to get the number of elements the slice references). Also, if the struct contains pointers, the resulting bytes hold the pointers' addresses rather than the pointed-to data.

slice := []int{1,2,3,4,5,6,7,8,9,10}
fmt.Println(unsafe.Sizeof(slice)) // 24

type Key struct {
	Part interface{}
	Offset uint64
}

func (k *Key) hashUint64() uint64 {
	buf := (*[unsafe.Sizeof(*k)]byte)(unsafe.Pointer(k))
	return xxhash.Sum64(buf[:])
}
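
A hedged usage sketch (somePart is a placeholder type): since Part is an interface, the hashed bytes include the interface's internal type and data pointers, so equal hashes are only guaranteed when Part holds the same underlying value:

type somePart struct{}

p := &somePart{}
k1 := &Key{Part: p, Offset: 42}
k2 := &Key{Part: p, Offset: 42}
fmt.Println(k1.hashUint64() == k2.hashUint64()) // true: identical bytes, including the interface pointers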

Appending a string to an existing []byte

It can be done like this:

str := "1231445"
arr := []byte{1, 2, 3}
arr = append(arr, str...)

Converting an []int64 to a []byte

This directly manipulates the underlying SliceHeader:

func int64ToByteSlice(a []int64) (b []byte) {
	sh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
	sh.Data = uintptr(unsafe.Pointer(&a[0])) // b aliases a's backing array, so a must stay alive while b is in use
	sh.Len = len(a) * int(unsafe.Sizeof(a[0]))
	sh.Cap = sh.Len
	return
}
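
reflect.SliceHeader is deprecated in recent Go releases; since Go 1.17 the same trick can be written with unsafe.Slice, which avoids assembling the header by hand. A sketch (the function name is illustrative; the result still aliases a's memory):

func int64ToByteSliceModern(a []int64) []byte {
	if len(a) == 0 {
		return nil
	}
	return unsafe.Slice((*byte)(unsafe.Pointer(&a[0])), len(a)*int(unsafe.Sizeof(a[0])))
}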

A concurrency-safe sync.WaitGroup

This concurrency-safe sync.WaitGroup exists so that goroutines to be waited on can be added at runtime, i.e. Add may be called concurrently with Wait.

// WaitGroup wraps sync.WaitGroup and makes safe to call Add/Wait
// from concurrent goroutines.
//
// An additional limitation is that call to Wait prohibits further calls to Add
// until return.
type WaitGroup struct {
	sync.WaitGroup
	mu sync.Mutex
}

// Add registers n additional workers. Add may be called from concurrent goroutines.
func (wg *WaitGroup) Add(n int) {
	wg.mu.Lock()
	wg.WaitGroup.Add(n)
	wg.mu.Unlock()
}

// Wait waits until all the goroutines call Done.
//
// Wait may be called from concurrent goroutines.
//
// Further calls to Add are blocked until return from Wait.
func (wg *WaitGroup) Wait() {
	wg.mu.Lock()
	wg.WaitGroup.Wait()
	wg.mu.Unlock()
}

// WaitAndBlock waits until all the goroutines call Done and then prevents
// from new goroutines calling Add.
//
// Further calls to Add are always blocked. This is useful for graceful shutdown
// when other goroutines calling Add must be stopped.
//
// wg cannot be used after this call.
func (wg *WaitGroup) WaitAndBlock() {
	wg.mu.Lock()
	wg.WaitGroup.Wait()

	// Do not unlock wg.mu, so other goroutines calling Add are blocked.
}

// There is no need in wrapping WaitGroup.Done, since it is already goroutine-safe.
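
A hedged sketch of the graceful-shutdown pattern this enables (function names are illustrative):

var wg WaitGroup

// handleRequest may be called concurrently with stop.
func handleRequest() {
	wg.Add(1) // safe even while Wait is running
	defer wg.Done()
	// ... process the request ...
}

// stop is called once during shutdown.
func stop() {
	// Wait for in-flight requests; afterwards any goroutine calling Add blocks forever.
	wg.WaitAndBlock()
}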

Timer pool

Creating timers at a high rate has a noticeable cost. To reduce it in such cases, sync.Pool can be used to recycle timers:

// Get returns a timer for the given duration d from the pool.
//
// Return back the timer to the pool with Put.
func Get(d time.Duration) *time.Timer {
	if v := timerPool.Get(); v != nil {
		t := v.(*time.Timer)
		if t.Reset(d) {
			logger.Panicf("BUG: active timer trapped to the pool!")
		}
		return t
	}
	return time.NewTimer(d)
}

// Put returns t to the pool.
//
// t cannot be accessed after returning to the pool.
func Put(t *time.Timer) {
	if !t.Stop() {
		// Drain t.C if it wasn't obtained by the caller yet.
		select {
		case <-t.C:
		default:
		}
	}
	timerPool.Put(t)
}

var timerPool sync.Pool
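
A usage sketch: a pooled timer can replace time.After in a select, avoiding a fresh timer allocation on every call (waitWithTimeout is an illustrative helper):

func waitWithTimeout(done <-chan struct{}, d time.Duration) bool {
	t := timerpool.Get(d)
	defer timerpool.Put(t) // Put stops the timer and drains t.C if necessary
	select {
	case <-done:
		return true
	case <-t.C:
		return false
	}
}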

Limiting concurrent requests

In VictoriaMetrics, vminsert sits between vmagent and vmstorage: it receives traffic from vmagent and forwards it to vmstorage. If vmstorage hangs, processes data too slowly, or goes offline, forwarding may stall, driving up vminsert's CPU and memory usage until the component fails. To guard against this, vminsert uses a concurrency limiter: when incoming traffic surges, it keeps the system stable at the cost of dropping some data.

The VictoriaMetrics source describes the limiter as follows:

Limit the number of concurrent f calls in order to prevent from excess memory usage and CPU thrashing

The limiter takes two parameters: maxConcurrentInserts and maxQueueDuration. The former caps how many requests may be processed in parallel during a burst; the latter bounds how long a request may wait in the queue. Note that Do(f func() error) is called concurrently from many goroutines while ch is global, so a caller may have to wait for other requests to release a slot (a struct{} in the channel).

The limiter also exports metrics that expose its current state, and it uses cgroup.AvailableCPUs()*4 (i.e. runtime.GOMAXPROCS(-1)*4) as the default value of maxConcurrentInserts.

When the limiter guards something like HTTP request handling, it does not limit the requests arriving from below; it limits the processing of those requests. In high-traffic services that processing is where most memory goes, typically covering data reads and buffer allocation/copying. The inbound connections themselves are bounded by /proc/sys/net/core/somaxconn and the socket buffers.

var (
	maxConcurrentInserts = flag.Int("maxConcurrentInserts", cgroup.AvailableCPUs()*4, "The maximum number of concurrent inserts. Default value should work for most cases, "+
		"since it minimizes the overhead for concurrent inserts. This option is tightly coupled with -insert.maxQueueDuration")
	maxQueueDuration = flag.Duration("insert.maxQueueDuration", time.Minute, "The maximum duration for waiting in the queue for insert requests due to -maxConcurrentInserts")
)

// ch is the channel for limiting concurrent calls to Do.
var ch chan struct{}

// Init initializes concurrencylimiter.
//
// Init must be called after flag.Parse call.
func Init() {
	ch = make(chan struct{}, *maxConcurrentInserts) // initialize the limiter: at most maxConcurrentInserts concurrent requests
}

// Do calls f with the limited concurrency.
func Do(f func() error) error {
	// Limit the number of concurrent f calls in order to prevent from excess
	// memory usage and CPU thrashing.
	select {
	case ch <- struct{}{}: // a slot was acquired: start processing the request
		err := f() // block until f finishes
		<-ch       // release the slot so the next request can be processed
		return err
	default:
	}

	// If the concurrency limit maxConcurrentInserts has been reached, wait for another Do call to release a slot.
	// All the workers are busy.
	// Sleep for up to *maxQueueDuration.
	concurrencyLimitReached.Inc()
	t := timerpool.Get(*maxQueueDuration) // get a timer with the waiting deadline set to maxQueueDuration
	select {
	case ch <- struct{}{}: // a slot was released within maxQueueDuration: recycle the timer and proceed
		timerpool.Put(t)
		err := f()
		<-ch
		return err
	case <-t.C: // no slot freed up within maxQueueDuration: recycle the timer, drop the request and return an error
		timerpool.Put(t)
		concurrencyLimitTimeout.Inc()
		return &httpserver.ErrorWithStatusCode{
			Err: fmt.Errorf("cannot handle more than %d concurrent inserts during %s; possible solutions: "+
				"increase `-insert.maxQueueDuration`, increase `-maxConcurrentInserts`, increase server capacity", *maxConcurrentInserts, *maxQueueDuration),
			StatusCode: http.StatusServiceUnavailable,
		}
	}
}

var (
	concurrencyLimitReached = metrics.NewCounter(`vm_concurrent_insert_limit_reached_total`)
	concurrencyLimitTimeout = metrics.NewCounter(`vm_concurrent_insert_limit_timeout_total`)

	_ = metrics.NewGauge(`vm_concurrent_insert_capacity`, func() float64 {
		return float64(cap(ch))
	})
	_ = metrics.NewGauge(`vm_concurrent_insert_current`, func() float64 {
		return float64(len(ch))
	})
)
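
A sketch of how a caller might wrap request processing with the limiter (processRequest is an illustrative helper):

func insertHandler(req *http.Request) error {
	return concurrencylimiter.Do(func() error {
		// Runs only while a concurrency slot is held.
		return processRequest(req)
	})
}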

Priority control

VictoriaMetrics' pacelimiter library implements priority control. Its main methods are Inc, Dec, and WaitIfNeeded. Low-priority tasks call WaitIfNeeded; if high-priority tasks are in flight (they call Inc), the low-priority tasks must wait until the high-priority tasks finish (calling Dec) before continuing.

// PaceLimiter throttles WaitIfNeeded callers while the number of Inc calls is bigger than the number of Dec calls.
//
// It is expected that Inc is called before performing high-priority work,
// while Dec is called when the work is done.
// WaitIfNeeded must be called inside the work which must be throttled (i.e. lower-priority work).
// It may be called in the loop before performing a part of low-priority work.
type PaceLimiter struct {
	mu          sync.Mutex
	cond        *sync.Cond
	delaysTotal uint64
	n           int32
}

// New returns pace limiter that throttles WaitIfNeeded callers while the number of Inc calls is bigger than the number of Dec calls.
func New() *PaceLimiter {
	var pl PaceLimiter
	pl.cond = sync.NewCond(&pl.mu)
	return &pl
}

// Inc increments pl.
func (pl *PaceLimiter) Inc() {
	atomic.AddInt32(&pl.n, 1)
}

// Dec decrements pl.
func (pl *PaceLimiter) Dec() {
	if atomic.AddInt32(&pl.n, -1) == 0 {
		// Wake up all the goroutines blocked in WaitIfNeeded,
		// since the number of Dec calls equals the number of Inc calls.
		pl.cond.Broadcast()
	}
}

// WaitIfNeeded blocks while the number of Inc calls is bigger than the number of Dec calls.
func (pl *PaceLimiter) WaitIfNeeded() {
	if atomic.LoadInt32(&pl.n) <= 0 {
		// Fast path - there is no need in lock.
		return
	}
	// Slow path - wait until Dec is called.
	pl.mu.Lock()
	for atomic.LoadInt32(&pl.n) > 0 {
		pl.delaysTotal++
		pl.cond.Wait()
	}
	pl.mu.Unlock()
}

// DelaysTotal returns the number of delays inside WaitIfNeeded.
func (pl *PaceLimiter) DelaysTotal() uint64 {
	pl.mu.Lock()
	n := pl.delaysTotal
	pl.mu.Unlock()
	return n
}
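
A hedged sketch of the intended calling pattern (chunk and process are illustrative): high-priority work brackets itself with Inc/Dec, while low-priority work periodically yields via WaitIfNeeded:

var pl = pacelimiter.New()

// High-priority work, e.g. serving a query.
func highPriorityWork() {
	pl.Inc()
	defer pl.Dec()
	// ... latency-sensitive work ...
}

// Low-priority work, e.g. a background merge, done in small chunks.
func lowPriorityWork(chunks []chunk) {
	for _, c := range chunks {
		pl.WaitIfNeeded() // blocks while any high-priority work is in flight
		process(c)
	}
}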