glog is Google's logging library for Go. It provides a rich set of API functions, supports leveled log writing and log file rotation, and can print logs to the terminal or write them to files under a specified path. This post covers how to use glog, along with some notes on snippets from its source code.
How to use glog
Create the project directory with mkdir, using the following layout:
/LearningGo$ tree -L 1
.
├── bin
├── pkg
└── src
3 directories
Create the test code main.go under src, then download the library with go get github.com/golang/glog from the project directory (make sure GOPATH points at this project directory first).
package main

import (
	"flag"
	"fmt"
	"os"

	"github.com/golang/glog"
)

func usage() {
	fmt.Fprintf(os.Stderr, "Usage: ./Program -stderrthreshold=[INFO|WARNING|ERROR|FATAL] -log_dir=[string]\n")
	flag.PrintDefaults()
	os.Exit(2)
}

func init() {
	flag.Usage = usage
	flag.Parse()
}

func main() {
	printLines := 100
	for i := 0; i < printLines; i++ {
		glog.Errorf("Error Line:%d\n", i+1)
		glog.Infof("Info Line:%d\n", i+1)
		glog.Warningf("Warning Line:%d\n", i+1)
	}
	glog.Flush()
}
In the code above, flag.Parse parses the command-line arguments. Although we do not handle glog's flags ourselves, glog's own init() function has already registered them, adding a number of configuration parameters. Errorf(), Infof(), and Warningf() write log entries at their respective severity levels, and Flush() ensures buffered data is written out to the files in order. Build and run the code:
$ ./main -log_dir="./logs" -stderrthreshold="ERROR"
...
E1228 09:26:21.750647 28573 main.go:24] Error Line:95
E1228 09:26:21.750668 28573 main.go:24] Error Line:96
E1228 09:26:21.750689 28573 main.go:24] Error Line:97
E1228 09:26:21.750710 28573 main.go:24] Error Line:98
E1228 09:26:21.750734 28573 main.go:24] Error Line:99
E1228 09:26:21.750756 28573 main.go:24] Error Line:100
$ ./main -log_dir="./logs" -stderrthreshold="FATAL"
$ tree logs/ -L 1
logs/
├── main.ERROR -> main.mike-Lenovo-Product.mike.log.ERROR.20161228-092006.28370
├── main.INFO -> main.mike-Lenovo-Product.mike.log.INFO.20161228-092006.28370
├── main.mike-Lenovo-Product.mike.log.ERROR.20161228-092006.28370
├── main.mike-Lenovo-Product.mike.log.INFO.20161228-092006.28370
├── main.mike-Lenovo-Product.mike.log.WARNING.20161228-092006.28370
└── main.WARNING -> main.mike-Lenovo-Product.mike.log.WARNING.20161228-092006.28370
In the runs above, log_dir controls the directory the log files are written to, while stderrthreshold ensures that only messages at or above that severity are also emitted to stderr, the standard error stream; the default is ERROR. When it is set to FATAL, no error output appears on stderr at all.
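The threshold behaviour can be pictured with a minimal, stdlib-only sketch. This is only an illustration of the idea, not glog's implementation; the names severity, sevNames, and logAt are invented here:

```go
package main

import (
	"fmt"
	"os"
)

type severity int

const (
	infoSev severity = iota
	warningSev
	errorSev
	fatalSev
)

var sevNames = []string{"INFO", "WARNING", "ERROR", "FATAL"}

// logAt always writes the message to stdout (standing in for the per-severity
// log files) and mirrors it to stderr only when s is at or above threshold.
func logAt(s, threshold severity, msg string) {
	line := fmt.Sprintf("%s: %s", sevNames[s], msg)
	fmt.Fprintln(os.Stdout, line)
	if s >= threshold {
		fmt.Fprintln(os.Stderr, line)
	}
}

func main() {
	threshold := errorSev // like running with -stderrthreshold=ERROR
	logAt(infoSev, threshold, "written to the log file only")
	logAt(errorSev, threshold, "written to the log file and mirrored to stderr")
}
```

With threshold set to fatalSev, neither call would reach stderr, matching the behaviour seen with -stderrthreshold="FATAL" above.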
Source code snippets
On the file-handling side, this code decides where log files are written: it reads the log_dir flag and, if non-empty, appends its value to the list of candidate log directories; os.TempDir() is always appended as a fallback. Log files are later created in the first directory in this list that works.
glog/glog_file.go
var logDir = flag.String("log_dir", "", "If non-empty, write log files in this directory")

var logDirs []string

func createLogDirs() {
	if *logDir != "" {
		logDirs = append(logDirs, *logDir)
	}
	logDirs = append(logDirs, os.TempDir())
}
The following code obtains the current user and the machine's hostname via the os package, and generates the log file names from them.
func init() {
	h, err := os.Hostname()
	if err == nil {
		host = shortHostname(h)
	}

	current, err := user.Current()
	if err == nil {
		userName = current.Username
	}

	// Sanitize userName since it may contain filepath separators on Windows.
	userName = strings.Replace(userName, `\`, "_", -1)
}

// logName returns a new log file name containing tag, with start time t, and
// the name for the symlink for tag.
func logName(tag string, t time.Time) (name, link string) {
	name = fmt.Sprintf("%s.%s.%s.log.%s.%04d%02d%02d-%02d%02d%02d.%d",
		program,
		host,
		userName,
		tag,
		t.Year(),
		t.Month(),
		t.Day(),
		t.Hour(),
		t.Minute(),
		t.Second(),
		pid)
	return name, program + "." + tag
}
The create function produces the log files. sync.Once guards the setup of the directory list so it cannot run more than once; the rest of the function creates the log file itself and then the symlink pointing at it.
var onceLogDirs sync.Once

// create creates a new log file and returns the file and its filename, which
// contains tag ("INFO", "FATAL", etc.) and t. If the file is created
// successfully, create also attempts to update the symlink for that tag, ignoring
// errors.
func create(tag string, t time.Time) (f *os.File, filename string, err error) {
	onceLogDirs.Do(createLogDirs)
	if len(logDirs) == 0 {
		return nil, "", errors.New("log: no log dirs")
	}
	name, link := logName(tag, t)
	var lastErr error
	for _, dir := range logDirs {
		fname := filepath.Join(dir, name)
		f, err := os.Create(fname)
		if err == nil {
			symlink := filepath.Join(dir, link)
			os.Remove(symlink)        // ignore err
			os.Symlink(name, symlink) // ignore err
			return f, fname, nil
		}
		lastErr = err
	}
	return nil, "", fmt.Errorf("log: cannot create log: %v", lastErr)
}
glog/glog.go
This file holds the implementation behind the exported API. The key concern is that buffered data is written exactly once, with no data loss, so the logging object (of type loggingT) embeds a sync.Mutex.
const (
	infoLog severity = iota
	warningLog
	errorLog
	fatalLog
	numSeverity = 4
)

const severityChar = "IWEF"

var severityName = []string{
	infoLog:    "INFO",
	warningLog: "WARNING",
	errorLog:   "ERROR",
	fatalLog:   "FATAL",
}
type loggingT struct {
	// Boolean flags. Not handled atomically because the flag.Value interface
	// does not let us avoid the =true, and that shorthand is necessary for
	// compatibility. TODO: does this matter enough to fix? Seems unlikely.
	toStderr     bool // The -logtostderr flag.
	alsoToStderr bool // The -alsologtostderr flag.

	// Level flag. Handled atomically.
	stderrThreshold severity // The -stderrthreshold flag.

	// freeList is a list of byte buffers, maintained under freeListMu.
	freeList *buffer
	// freeListMu maintains the free list. It is separate from the main mutex
	// so buffers can be grabbed and printed to without holding the main lock,
	// for better parallelization.
	freeListMu sync.Mutex

	// mu protects the remaining elements of this structure and is
	// used to synchronize logging.
	mu sync.Mutex
	// file holds writer for each of the log types.
	file [numSeverity]flushSyncWriter
	// pcs is used in V to avoid an allocation when computing the caller's PC.
	pcs [1]uintptr
	// vmap is a cache of the V Level for each V() call site, identified by PC.
	// It is wiped whenever the vmodule flag changes state.
	vmap map[uintptr]Level
	// filterLength stores the length of the vmodule filter chain. If greater
	// than zero, it means vmodule is enabled. It may be read safely
	// using sync.LoadInt32, but is only modified under mu.
	filterLength int32
	// traceLocation is the state of the -log_backtrace_at flag.
	traceLocation traceLocation
	// These flags are modified only under lock, although verbosity may be fetched
	// safely using atomic.LoadInt32.
	vmodule   moduleSpec // The state of the -vmodule flag.
	verbosity Level      // V logging level, the value of the -v flag.
}
var logging loggingT

// Fatal logs to the FATAL, ERROR, WARNING, and INFO logs,
// including a stack trace of all running goroutines, then calls os.Exit(255).
// Arguments are handled in the manner of fmt.Print; a newline is appended if missing.
func Fatal(args ...interface{}) {
	logging.print(fatalLog, args...)
}

func (l *loggingT) print(s severity, args ...interface{}) {
	l.printDepth(s, 1, args...)
}

func (l *loggingT) printDepth(s severity, depth int, args ...interface{}) {
	buf, file, line := l.header(s, depth)
	fmt.Fprint(buf, args...)
	if buf.Bytes()[buf.Len()-1] != '\n' {
		buf.WriteByte('\n')
	}
	l.output(s, buf, file, line, false)
}
The code below sets up a daemon that periodically flushes buffered data to disk. Flushing on a timer, rather than on every write, improves throughput.
const flushInterval = 30 * time.Second

// flushDaemon periodically flushes the log file buffers.
func (l *loggingT) flushDaemon() {
	for _ = range time.NewTicker(flushInterval).C {
		l.lockAndFlushAll()
	}
}

// lockAndFlushAll is like flushAll but locks l.mu first.
func (l *loggingT) lockAndFlushAll() {
	l.mu.Lock()
	l.flushAll()
	l.mu.Unlock()
}

// flushAll flushes all the logs and attempts to "sync" their data to disk.
// l.mu is held.
func (l *loggingT) flushAll() {
	// Flush from fatal down, in case there's trouble flushing.
	for s := fatalLog; s >= infoLog; s-- {
		file := l.file[s]
		if file != nil {
			file.Flush() // ignore error
			file.Sync()  // ignore error
		}
	}
}
Flush and Sync here are both methods of the flushSyncWriter interface:
// flushSyncWriter is the interface satisfied by logging destinations.
type flushSyncWriter interface {
	Flush() error
	Sync() error
	io.Writer
}
The core code also includes a Flush with a timeout, to keep a Flush from blocking indefinitely; if it takes longer than the timeout, a warning is written directly to stderr.
// timeoutFlush calls Flush and returns when it completes or after timeout
// elapses, whichever happens first. This is needed because the hooks invoked
// by Flush may deadlock when glog.Fatal is called from a hook that holds
// a lock.
func timeoutFlush(timeout time.Duration) {
	done := make(chan bool, 1)
	go func() {
		Flush() // calls logging.lockAndFlushAll()
		done <- true
	}()
	select {
	case <-done:
	case <-time.After(timeout):
		fmt.Fprintln(os.Stderr, "glog: Flush took longer than", timeout)
	}
}
The log rotation code:
func (sb *syncBuffer) Write(p []byte) (n int, err error) {
	if sb.nbytes+uint64(len(p)) >= MaxSize {
		if err := sb.rotateFile(time.Now()); err != nil {
			sb.logger.exit(err)
		}
	}
	n, err = sb.Writer.Write(p)
	sb.nbytes += uint64(n)
	if err != nil {
		sb.logger.exit(err)
	}
	return
}

// rotateFile closes the syncBuffer's file and starts a new one.
func (sb *syncBuffer) rotateFile(now time.Time) error {
	if sb.file != nil {
		sb.Flush()
		sb.file.Close()
	}
	var err error
	sb.file, _, err = create(severityName[sb.sev], now)
	sb.nbytes = 0
	if err != nil {
		return err
	}

	sb.Writer = bufio.NewWriterSize(sb.file, bufferSize)

	// Write header.
	var buf bytes.Buffer
	fmt.Fprintf(&buf, "Log file created at: %s\n", now.Format("2006/01/02 15:04:05"))
	fmt.Fprintf(&buf, "Running on machine: %s\n", host)
	fmt.Fprintf(&buf, "Binary: Built with %s %s for %s/%s\n", runtime.Compiler, runtime.Version(), runtime.GOOS, runtime.GOARCH)
	fmt.Fprintf(&buf, "Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg\n")
	n, err := sb.file.Write(buf.Bytes())
	sb.nbytes += uint64(n)
	return err
}
Because the logging object is shared across goroutines, any parameter change must take effect immediately, without racing against concurrent readers and writers of the shared object. The sync/atomic package handles these reads and writes, as in the code below:
// get returns the value of the severity.
func (s *severity) get() severity {
	return severity(atomic.LoadInt32((*int32)(s)))
}

// set sets the value of the severity.
func (s *severity) set(val severity) {
	atomic.StoreInt32((*int32)(s), int32(val))
}

// Things are consistent now, so enable filtering and verbosity.
// They are enabled in order opposite to that in V.
atomic.StoreInt32(&logging.filterLength, int32(len(filter)))
Supplement: sync/atomic
In the sample program below, multiple goroutines operate on shared data concurrently. The atomic increments never collide or get lost: the counter advances one step at a time, so even though memory is shared, no data race occurs.
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

func main() {
	var ops uint64
	for i := 0; i < 50; i++ {
		go func() {
			for {
				atomic.AddUint64(&ops, 1)
				//fmt.Println("Ops:", ops)
				time.Sleep(time.Millisecond)
			}
		}()
	}
	time.Sleep(time.Second)
	opsFinal := atomic.LoadUint64(&ops)
	fmt.Println("Ops:", opsFinal)
}
Finally, feel free to visit my personal site, jsmean.com, for more of my technical blog posts.